Thursday, February 16, 2017

IoT security made easy with Azure IoT Suite

There has been quite a lot of talk about security concerns around IoT devices. Businesses want to be sure they are not opening themselves up to a ton of risk as they expand their data streams to increase productivity. Thankfully, Azure IoT Suite makes securely connecting your IoT devices to the cloud quick and easy.

Before we get too far, it's important to explain exactly what IoT security means:

IoT Security

In its simplest form, IoT security is broken down into zones and trust boundaries. Typically the zones are defined as follows:

  • Devices – IoT devices that capture and send data
  • Field Gateway – Devices that can aggregate data to and from devices
  • Cloud Gateways – Endpoint that receives / sends data and commands to Field Gateways or directly to devices
  • Services – The services that “do work” on the data that is received from the IoT devices (via the gateways)
Trust boundaries are where data moves between zones. As data crosses a boundary it can be subject to Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege, commonly called STRIDE. For more information about STRIDE, visit this link. It's important to make sure that your IoT devices and data are secure across all zones and trust boundaries.

IoT Security in Azure

The best way to make sure that your application is secure is to leverage threat modeling during the design and implementation phases. This forces the design of the application to take security into consideration instead of bolting it on at the very end. Just like with its other products, Microsoft leveraged threat modeling for the Azure IoT components.

I want to go through how Azure handles security across the various trust boundaries.

Device to Field Gateway

IoT devices and Field Gateways are very similar and usually connect in much the same way. Typically the security risks at the device-to-Field Gateway boundary are physical or hardware related, so it's important to protect your IoT devices from physical harm and intrusion. While harm can be done while transmitting data to and from them, the risk here is typically more physical than digital.

Device / Field Gateway to Cloud Gateway

In Azure IoT, the Cloud gateway is called the IoT Hub. The IoT Hub uses time-sensitive security tokens to authenticate devices and services so that keys are never sent over the wire. Although this sounds complex, the Azure IoT SDKs provide simple ways of generating tokens automatically. Here is a great walkthrough on how that's done.
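
For example, a device written with the C# device SDK never builds tokens by hand; the SDK derives and refreshes SAS tokens from the device's connection string. Here is a minimal sketch, assuming the Microsoft.Azure.Devices.Client NuGet package and placeholder hub/device values:

using System;
using System.Text;
using Microsoft.Azure.Devices.Client;

class SendTelemetry
{
    static void Main()
    {
        // Device-scoped connection string copied from the IoT Hub device registry (placeholder values).
        var connectionString = "HostName=myhub.azure-devices.net;DeviceId=myDevice;SharedAccessKey=...";

        // The SDK generates and refreshes short-lived SAS tokens from this key;
        // the shared access key itself is never sent over the wire.
        var deviceClient = DeviceClient.CreateFromConnectionString(connectionString, TransportType.Mqtt);

        var message = new Message(Encoding.UTF8.GetBytes("{\"temperature\": 21.5}"));
        deviceClient.SendEventAsync(message).GetAwaiter().GetResult();

        Console.WriteLine("Telemetry sent.");
    }
}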

Inside the hub itself you can set up Shared Access policies that grant connecting devices only specific permissions inside of Azure. If configured correctly, this greatly reduces the risk of data tampering, since devices do not have direct access to data stores and other back-end resources.
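
As a concrete (hypothetical) example of keeping hub-level keys away from devices, a back-end service can use a shared access policy with registry write permission to create device identities, and each device then connects with nothing more than its own device-scoped key. A sketch using the Microsoft.Azure.Devices service SDK, with placeholder names:

using System;
using Microsoft.Azure.Devices;

class RegisterDevice
{
    static void Main()
    {
        // Connection string for a shared access policy with registry write permission (placeholder values).
        var serviceConnectionString = "HostName=myhub.azure-devices.net;SharedAccessKeyName=registryReadWrite;SharedAccessKey=...";

        var registryManager = RegistryManager.CreateFromConnectionString(serviceConnectionString);

        // Create a device identity; IoT Hub generates per-device symmetric keys.
        var device = registryManager.AddDeviceAsync(new Device("field-sensor-01")).GetAwaiter().GetResult();

        // Only this device-scoped key is handed to the device, never a hub-level policy key.
        Console.WriteLine("Device key: " + device.Authentication.SymmetricKey.PrimaryKey);
    }
}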

Cloud Gateway to Services

This is by far the easiest boundary to secure in an IoT implementation. Azure IoT Suite allows users to easily configure connections between the IoT Hub and other components like Stream Analytics, Azure Data Lake, SQL Azure, etc. All of these components are only available in your own Azure subscription and should reside inside the same data center to prevent data from potentially being tampered with while in transit (although that's incredibly unlikely since they are all Microsoft data centers). Since only you can access and configure the components and the data is processed and analyzed in the same data center, the security risk is very low.
The important thing to remember about security in the Services zone is that if you have any VPNs into Azure, the data is potentially vulnerable there, and the normal VPN security precautions should be taken to make sure that no one inside of your network can access the data.



As you can see, IoT security is important, but it can be easy if you understand the threats involved and know how to mitigate them. Leveraging Azure IoT Suite makes an IoT implementation quick and easy. Please contact Flex Manage to learn more about IoT and how we can ensure your devices and data stay secure.

Thursday, July 14, 2016

Adding a user to a SQL Azure Database

This week I had to create a few new users for a SQL Azure database. I have to do this often, but I can never remember the exact steps that need to be run.

If you've connected to a SQL Azure instance, you'll quickly see that some of the useful UI options don't exist or work differently. For example, if you want to add a new user, you get a SQL script that just creates the login, but doesn't give the user any access to the database.

In order to actually give the user access to a database, you need to switch your connection to the database you want the user to access and run the following command:

CREATE USER [UserName]
FOR LOGIN [UserName]
WITH DEFAULT_SCHEMA = dbo
GO

-- Add the user to the db_owner role (note: no square brackets inside the quoted name)
EXEC sp_addrolemember N'db_owner', N'UserName'

GO

You can switch out the default schema and role to whatever fits your needs.

After all of that, the user should work, but if you try to connect from SQL Mgmt Studio, you'll get an error that the user does not have access to the Master database. If this weren't SQL Azure, we could just set the user's default database and we'd be set, but SQL Azure doesn't support that (yet), so we have two options:

Option 1: Give the user access to the master database:
This can be done by running the "Create User" script from above, but on the master database.

Option 2: Pick the specific database from SQL Mgmt Studio.
For this option, when you open a new connection to a SQL Server, click the "Options" button in the lower right of the screen:

Then type in the name of the database and click Connect:

Done!

Tuesday, July 12, 2016

Restart-AzureWebsite: Your Azure credentials have not been set up

I recently ran into an issue when trying to restart an Azure App Service from PowerShell. The command to do it is Restart-AzureWebsite, but when I attempted to run it, I got the following error:

“Restart-AzureWebsite : Your Azure credentials have not been set up or have expired, please run Add-AzureAccount to set up your Azure credentials”

I ran the Add-AzureAccount command but I continued to get the same error.

After some digging I realized that I had more than 1 Azure subscription and the command was using the wrong one when attempting to restart the website.

I ran the following command to retrieve the subscriptions that are cached on my PC:

Get-AzureSubscription

That returns all of the subscriptions for all of the accounts that you’ve added from Azure. I found the subscription that was correct for the website I was attempting to restart and then ran this command:

Select-AzureSubscription -SubscriptionName "MySubscription"

That set the subscription for the current session to the one that contained the website I needed to restart. From there, everything went smoothly.

Enjoy!

Friday, January 2, 2015

Quick and easy table Audit with code first EF6 and Migrations


For one of my current projects, we had a requirement to track changes made to two tables in the database for auditing purposes. Initially I thought we could just use the built-in SQL change tracking features (here), however we were deploying this application to Azure and, as of now, that feature is not available in SQL Azure.
I did some investigating into this and there were a few options:

Option 1: Create my own triggers on these tables and write data to the audit tables when the rows were updated.

This was somewhat straightforward, but I'm not a fan of triggers, and I wanted to keep all of my code in C#/Entity Framework for ease of code management down the road. Also, I didn't want to have to update the triggers whenever the table was altered.

Option 2: Implement this workaround leveraging a local data cache

While this seems like an easier approach, I wouldn't be fully in control of what was happening during updates. Also, this approach only tells you which rows have changed, not what the values were before the change. I needed to be able to tell what every change to a row was, along with its historical values.

Option 3: Build my own using EF Inheritance and Migrations

Building your own is always the most fun option anyway, isn't it?
I decided that I would use Migrations and the built in inheritance features of EF and override the SaveChanges function to write to my audit tables when needed.
Note: For my project I was using EF Code first with migrations, so if you are not using code first or migrations, this may take some massaging.
There are four steps to this:
Step 1: Setting up the tables
Before I set up the auditing, I had a single object/table called “Claim” and it was defined as shown:
    public class Claim
    {
        [Key]
        public long ClaimId { get; set; }
        [Required]
        public string Title { get; set; }
        public DateTime? ClaimFiledDate { get; set; }
        [Required]
        public DateTime CreateDateTime { get; set; }
        public DateTime? UpdatedDateTime { get; set; }
       
        ...
    }
The class was pretty straightforward, leveraging Data Annotations to enforce rules in the MVC view and in the database when creating migrations. For my audit table, I wanted all of the same columns, including the primary key, along with a surrogate primary key for query optimization. I could have just created another table definition with the same columns, but then every time I wanted to make a change to the Claim object, I would have to remember to make the same change to the audit table. If we use inheritance and leverage the “magic” of migrations, we can get there with minimal code.
What I ended up doing was creating an abstract base class with most of the data columns in it and then creating two derived classes, Claim and ClaimAudit. Making it abstract means that EF won’t try to create a table for it when you create migrations.
Here is what they ended up looking like:
    public abstract class ClaimBase
    {
        [DatabaseGenerated(DatabaseGeneratedOption.None)]
        public virtual long ClaimId { get; set; }
        [Required]
        public string Title { get; set; }
       
        ...
    }
Claim:
    public class Claim: ClaimBase
    {
        [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
        public override long ClaimId { get; set; }
        [Required]
        [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
        public DateTime CreateDateTime { get; set; }
        public DateTime? UpdatedDateTime { get; set; }
        public bool? IsDeleted { get; set; }
       
        ...
    }
Claim Audit:
public class ClaimAudit : ClaimBase
{
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public long ClaimAuditId { get; set; }
    [Required, DatabaseGenerated(DatabaseGeneratedOption.Computed)]
    public DateTime TimeStamp { get; set; }
}
A few things to mention:
  • You will have to mark up the ClaimId primary key in the base class as DatabaseGeneratedOption.None so that you don’t get issues with multiple primary keys
  • You override that markup in the derived Claim class to set it to DatabaseGeneratedOption.Identity, since ClaimId is an identity column in the Claims table, but you do not override it in the audit class since you will be storing multiple versions of that key in the audit table
  • You create a new surrogate primary key on the audit table for query purposes
Step 2: Update “OnModelCreating”
Event though we’ve set up the inheritance in the code, we need to call out a few explicit things in the OnModelCreating function to enforce our primary keys and inheritance. The MapInheritedProperties and “toTable” function is what allows us to define the columns in the base class and have them created in the derived class. As for the HasKey, I don’t remember exactly why I had to use that, but essentially its reinforcing the primary keys that are defined in the markup.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<ClaimAudit>().HasKey(k => k.ClaimAuditId).Map(m =>
    {
        m.MapInheritedProperties();
        m.ToTable("ClaimAudits");
    });

    modelBuilder.Entity<Claim>().HasKey(k => k.ClaimId);

    base.OnModelCreating(modelBuilder);
}



Step 3: Run the Up migration

Now that we’ve updated the classes, when we create the migration script that creates the audit table, it automatically adds the columns from the Claimbase class:

CreateTable("dbo.ClaimAudits",
c => new
    {
        ClaimAuditId = c.Long(nullable: false, identity: true),
        TimeStamp = c.DateTime(nullable: false, defaultValueSql: "GETUTCDATE()"),
        ClaimId = c.Long(nullable: false),
        Title = c.String(nullable: false)

    })
.PrimaryKey(t => t.ClaimAuditId);



Step 4: Override SaveChanges

Ok, the tables are set up and created. The last step is to actually write the data to the audit table when things change. To do this, I used AutoMapper to map the Claim and ClaimAudit objects and added a row to the ClaimAudit table every time a Claim object changed. Note that for new Claim objects, I had to save them in a list and only add them to the ClaimAudit table once they were successfully added to the DB, so that the ClaimId value was properly populated.



public override int SaveChanges()
{
    Mapper.CreateMap<Claim, ClaimAudit>();

    var newClaims = new List<Claim>();

    //For each Claim, create an audit record if it's an update; otherwise wait until it's been saved to the DB so we can get the ClaimId
    foreach (var dbEntityEntry in ChangeTracker.Entries<Claim>().Where(e => e.State != EntityState.Unchanged))
    {
        if (dbEntityEntry.State == EntityState.Added)
        {
            newClaims.Add(dbEntityEntry.Entity);
        }
        else if (dbEntityEntry.State == EntityState.Modified)
        {
            var audit = Mapper.Map<ClaimAudit>(dbEntityEntry.Entity);
            ClaimAudits.Add(audit);
        }
    }

    base.SaveChanges();

    foreach (var newclaim in newClaims)
    {
        var audit = Mapper.Map<ClaimAudit>(newclaim);
        ClaimAudits.Add(audit);
    }
   
    return base.SaveChanges();
}
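
With the override in place, auditing is transparent to the calling code. A quick usage sketch (the ClaimsContext name and the query are hypothetical; it assumes the usual System and System.Linq usings):

using (var context = new ClaimsContext())
{
    var claim = context.Claims.First(c => c.ClaimId == 42);
    claim.Title = "Updated title";

    // The overridden SaveChanges detects the modified Claim and inserts a matching ClaimAudit row in the same call.
    context.SaveChanges();

    var auditCount = context.ClaimAudits.Count(a => a.ClaimId == 42);
    Console.WriteLine("Audit rows for claim 42: " + auditCount);
}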



That’s it! Simple and relatively easy database auditing in Azure with EF 6 and Migrations. Enjoy!

Friday, April 18, 2014

Sitefinity Blank Login Screen

 

On one of my current projects, we’re leveraging the Sitefinity CMS for our front end. Initially the setup went really smoothly and I was off to the races. There is a ton to learn and a lot of the functionality is fairly intuitive. I was super excited about it until I tried to deploy some code updates to our Development/Integration server.

We had already created a site on the development server directly and I didn’t need any of the content that I had created locally, so I figured it would be as easy as moving my code out to the DEV server. When I did this and went to fire up the site, I got a blank Login screen:

image

We went back and reviewed the windows roles/features that need to be turned on and everything seemed to look fine. I recopied the code over and over with no success.

Finally we did a folder compare between the initially deployed site on the DEV server and the code that I had moved. The only noticeable difference was that the GUIDs in some of the configuration files were different. That’s when I had a “Eureka” moment: I bet the database that I had stored locally had to be moved as well, because the GUIDs in the config files would have to match data in the database.

I did a backup/restore from my local PC to the dev server and it worked!

Lesson learned: The website and associated databases can’t be swapped out ad hoc. There needs to be planning and strategy involved.

Hopefully this saves someone some time.

Enjoy!

Monday, February 10, 2014

Build/Deploy Windows service with TFS

Building and deploying a web service or website via TFS is pretty straightforward, but automatically deploying a Windows service or console application takes a bit of setup. I’ve outlined the steps below to get your service deployed.

Step 1: Set up Automated Build and MSDeploy on the Target server.

If you are doing any sort of automated build/deploy I recommend using MSDeploy. You can follow steps 1-3 from a previous post here.

Step 2: Edit the .csproj file

The project file for the app you are trying to deploy needs to be edited so that TFS will create a directory for the output of the project. Otherwise the project will be built, but the assemblies will be placed in the code drop location with no organization to it.

To edit the file, go to the open file dialog and locate the csproj file. Select it but don’t click “Open”. Instead click on the dropdown arrow next to the Open button, select “Open with”, and choose the XML (Text) editor as shown:


Once the file is open, add the following “PropertyGroup” XML to the file above the ItemGroup section: 

  <PropertyGroup Condition="$(TeamBuildOutDir) != '' ">
    <OutputPath>$(TeamBuildOutDir)\$(MSBuildProjectName)</OutputPath>
  </PropertyGroup>

Step 3: Run the automated Build


At this point you want to run the automated build to verify that everything builds properly and that the output directory is created properly.


Step 4: Create a batch script to deploy the code


This approach is identical to the steps 5-7 outlined in the previous post here (same as above). However, the script will be different. We will still take the drop location as the first parameter, but instead of using the setparameters.xml file, we’ll set the parameters directly in the service call. We’ll also leverage the PreSync and PostSync abilities to uninstall the service before deploying and install the service after deploying. This way we don’t run into issues with the assemblies being in use while we are trying to deploy:

SET _location=###%1%###
SET _location=%_location:"###=%
SET _location=%_location:###"=%
SET _location=%_location:###=%
rem set _logfile=%_location%DeployWindowsServices.txt
set _msdeploy="C:\Program Files\IIS\Microsoft Web Deploy\msdeploy.exe"

set _serviceName=ACMEMonitorService
set _exeName="ACME.FASMonitor"
set _sourceDir=%_location%\ACME.FASMonitor
set _destDir=ACME_SchedulerJobs\ACME.FASMonitor\

echo Deploying %_serviceName% to %_destDir%

%_msdeploy% -verb:sync -source:dirPath="%_sourceDir%",includeAcls='False' -dest:dirpath="\\%_destServer%\c$\%_destDir%",computerName=%_destServer%,userName=%_user%,password=%_password%,authtype=NTLM -useCheckSum -preSync:runCommand="C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe c:\%_destDir%\%_exeName%.exe /u",waitInterval="15000",waitAttempts=1,dontUseCommandExe="false" -postSync:runCommand="C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe c:\%_destDir%\%_exeName%.exe",waitInterval="15000",waitAttempts=1,dontUseCommandExe="false"
%_msdeploy% -verb:sync -source:runCommand="net start %_serviceName%",waitInterval="25000",waitAttempts=1 -dest:auto,computerName=%_destServer%,userName=%_user%,password=%_password%,authtype=NTLM

echo Done Deploying %_serviceName% to %_destDir%



Step 5: Run the deployment.


That’s it! With the setup above, the Windows service will be built, deployed to the code drop location, and then deployed and installed on the destination server.


Other uses


The example above shows how to deploy a Windows service, but if you have a console app, it can be done the same way, just without the preSync/postSync actions. You can even take it a step further and run schtasks to disable/enable a task if the Windows service/console app is associated with a scheduled task.


Enjoy!

Wednesday, December 18, 2013

Checking Server to DB connectivity easily

We’ve all been there before. We set up our new web or application server with the latest code, update the connection strings with the proper SQL Server instance and password, and fire up the application only to see all sorts of SQL connection/login issues. These issues can be very frustrating to troubleshoot because you are not sure if the login is incorrect, the database name is spelled incorrectly, or if there are network issues preventing you from seeing the SQL Server. On Windows machines, there is a simple way to validate that the SQL Server is accessible from the server.

  • Remote desktop into the server having connection issues
  • Click on “Start” and search for Folder Options. Click on Folder Options, go to the View tab and validate that “Hide extensions for known file types” is unchecked:

image

  • Go to the desktop, right click and select New –> Text Document

image

  • Rename the file connection.udl and accept the rename warning
  • Double click on the udl file and it will open the Data Link window.
  • In the Connection tab, enter the information for the SQL Server and click the “Test Connection” button:

image

  • Now that you have successfully connected to the server, you can close the window, right click the connection.udl file, and choose “Open with”. Select “Notepad” to view the proper connection string.

It’s a simple and effective way to troubleshoot SQL connection issues between servers and the SQL DB.
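
If you'd rather script the check than click through the UDL dialog, a few lines of C# with SqlConnection do the same thing. This is just a sketch with placeholder values; note that if you paste the string recovered from the .udl file, you'll need to drop the Provider= part, since SqlConnection doesn't accept it:

using System;
using System.Data.SqlClient;

class ConnectionCheck
{
    static void Main()
    {
        // Placeholder connection string; substitute your server, database, and credentials.
        var connectionString = "Server=MyServer;Database=MyDatabase;User ID=MyUser;Password=MyPassword;";

        try
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                Console.WriteLine("Connected to " + connection.DataSource + " / " + connection.Database);
            }
        }
        catch (SqlException ex)
        {
            // Error 18456 generally means a bad login; timeouts usually point to network or firewall issues.
            Console.WriteLine("Connection failed (" + ex.Number + "): " + ex.Message);
        }
    }
}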

Enjoy!