Setting up Docker on Azure and deploying a sample ASPNET vNext app

This post is just to document my experience running Docker on Azure and deploying the ASPNET vNext sample HelloWeb app. I’m super excited about the Docker support in Azure and ASPNET vNext, and I couldn’t wait any longer to try it out. At a high level, these are the steps I took:

  • Set up a Docker client on a virtual machine running the Ubuntu desktop version
  • Created a Docker host VM on Azure
  • Pulled down the ASPNET vNext HelloWeb sample app
  • Created the Dockerfile
  • Built the container
  • Ran the container
  • Created an endpoint port mapping for TCP port 80

Setting up Docker Client

While the Docker client can be set up on Windows and Mac OS X machines, I decided to set it up in a virtual machine running Ubuntu desktop on my Mac.

Things we need to set up on the Docker client:

  • Install Node.js
  • Install the JavaScript package manager (npm)
  • Install the Azure cross-platform CLI tools
  • Connect to an Azure subscription
  • Install the Docker client

Since the Azure cross-platform CLI tools are written in Node.js, the first thing we’ll need to do is log in to the Ubuntu desktop virtual machine and install Node.js by running the command below:

sudo apt-get install nodejs-legacy

Next, install the JavaScript package manager by running the command below:

sudo apt-get install npm

Then install the Azure cross-platform CLI tools:

sudo npm install azure-cli --global

Connect to Azure Subscription

This process is very similar to the Azure PowerShell setup on Windows machines. Download the publish settings file by running the command below; it will launch a browser session where you will need to log in with your Windows Live account, after which the publish settings file will be downloaded to your local machine.

azure account download

Import the publish settings file

azure account import <publish settings file>

Finally, install the Docker client

sudo apt-get install docker.io

Create the Docker host virtual machine on Azure

We are going to use the azure vm docker create command to create the Docker host virtual machine on Azure. It uses the virtual machine extensions feature in Azure to install Docker once the virtual machine is provisioned. If you are not familiar with virtual machine extensions, I highly recommend reading this article; a list of available extensions is published in the Azure documentation. One important thing to remember here is that you should run this command from your Docker client machine; the walkthroughs online did not explicitly mention this. The reason is that, in addition to creating the virtual machine and installing Docker on it, the command creates the certificates the Docker client needs to talk to the Docker host. So if you don’t run this command from your Docker client machine, you may need to perform additional steps before the client can properly authenticate with the host.

If you want to see the azure vm docker create command usage, simply add the --help or -h switch right after azure vm docker create. Before we can create the host VM for Docker, we need to identify the Linux image to use; run the command below:

azure vm image list | grep "20150123"

The output from the command above lists the matching images; note the full image name for the Ubuntu server, which we’ll use in the next step.


Run the command below to create the Docker host VM on Azure.

azure vm docker create ram-docker-host "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB" rprakashg P@ssw0rd123 -e 22 -l "West US" -u https://rprakashg.blob.core.windows.net -z "Large" -s "a8349281-7715-45c8-ac55-78787752ea7e"

Once the virtual machine is fully provisioned and running, you can verify that your Docker client can talk to the host by running the following command.

docker --tls -H tcp://ram-docker-host.cloudapp.net:4243 info

Now that the Docker client and host are set up, let’s look at how we can run the sample HelloWeb app published by the ASPNET team in Docker.

Running ASPNET vNext sample app in docker

The ASPNET team has published an excellent walkthrough here. There are a few things I had to do differently, which I will cover below.

I had some issues cloning the aspnet/Home repository per the instructions in the article. I was getting a permission denied error, which had to do with how the client machine authenticates with GitHub. I went down the path of setting up my Docker client to talk to GitHub, and this really messed things up, as the Docker client could not talk to the Docker host after I created a new SSH key for GitHub. This could be an issue with how I set things up; I ended up wasting hours :(. Finally I fell back to HTTPS cloning as shown below. You can find the HTTPS clone URL right on the repository page on GitHub.

git clone https://github.com/aspnet/Home.git
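The HelloWeb sample ships with a Dockerfile in the app folder; this is the file the build step below consumes. As a rough sketch of what it contained at the time (based on the microsoft/aspnet base image used in the walkthrough; check the repo for the exact contents):

FROM microsoft/aspnet

COPY . /app
WORKDIR /app
RUN ["kpm", "restore"]

EXPOSE 5004
ENTRYPOINT ["./kestrel"]

The EXPOSE 5004 line matches the -p 80:5004 port mapping used when running the container later.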

Building the container image

On the Docker client machine running Ubuntu desktop I had to run the build command with elevated privileges using sudo. Additionally, I had to make sure the Docker client was talking to the host based on a public/default CA pool by adding --tls -H tcp://ram-docker-host.cloudapp.net:4243 (my Docker host) to pretty much every docker command I issued. I’m not sure why the default settings did not work; I definitely need to investigate this further, and if you have seen the same issue and have an explanation, let me know. Building the container image using the command below took a long time over HTTPS versus the default setting, which I think is a non-networked Unix socket.

sudo docker --tls -H tcp://ram-docker-host.cloudapp.net:4243 build -t hellowebapp .

If you want to check the status of the above command, simply copy the ID string returned by the previous command and run the command below:

sudo docker --tls -H tcp://ram-docker-host.cloudapp.net:4243 logs -t <replace with id string>

Running the container

I ran into similar issues here as with the build; additionally, the container just wouldn’t start. I encountered a “System.FormatException: Value for switch ‘/bin/bash’ is missing” error when I tried to run the container using the command described in the article. It worked fine once I started running all docker commands with the --tls -H options.

sudo docker --tls -H tcp://ram-docker-host.cloudapp.net:4243 run -d -t -p 80:5004 hellowebapp

Verify the running containers on the host:

sudo docker --tls -H tcp://ram-docker-host.cloudapp.net:4243 ps -a

Create an endpoint port mapping for TCP port 80

The last step, as discussed in the article, is to create an endpoint port mapping for TCP port 80 on the Docker host virtual machine. In the Azure portal, select HTTP from the name dropdown, set the protocol to TCP, and enter 80 as the value for both the public and private port.

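If you prefer the command line to the portal, the cross-platform CLI can create the endpoint as well; something like the command below should work (a sketch from memory, so verify the syntax with azure vm endpoint create --help):

azure vm endpoint create ram-docker-host 80 80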

I finally have the sample HelloWeb app successfully running inside Docker. Check it out here: http://ram-docker-host.cloudapp.net/

As a next step, I’m hoping to build a more real-world app in ASPNET vNext and test it out. I’ve already got my Mac configured for ASPNET vNext development with Sublime Text.

A huge shout out to the Linux team at Microsoft and everyone in the open source community who has contributed to the various tools. You can see the Microsoft stack is truly becoming more and more open.

Useful links

Docker On Azure

Docker support in ASPNET vNext

Docker

Cheers,

</Ram>

Setting up the Azure CLI on a Mac and creating Linux virtual machines from the terminal

This post is mostly for me, as a reference to the steps I took to configure a Mac to connect to Azure and create Linux virtual machines. If it helps others, then great.

The Azure command line tools for Mac and Linux allow us to create and manage virtual machines, web sites, and Azure Mobile Services from Mac and Linux desktops. Download and install the Azure SDK for Mac here.

Connecting to your Azure subscription

Before you can run operations on your Azure subscription you need to import your subscription. The steps to do this are pretty similar to how you set up Azure PowerShell on Windows machines.

Fire up a new terminal and run the following command:

azure account download

The above command will launch a browser session and take you to https://windows.azure.com/download/publishprofile.aspx

Save your azure publish settings file locally.

Next, execute the command below to import your publish settings file:

azure account import <file>

If everything went OK, you should be good to run operations on your Azure subscription. Next we will create the Linux virtual machine, but before we can do that we need to create an SSH certificate.

Creating an SSH certificate

A compatible SSH key must be specified for authentication when creating a Linux virtual machine in Azure. The supported formats for SSH certificates are .cer and .pem; virtual machine creation will fail if you use any other format. Run the command below in a terminal window to generate a compatible SSH key for authentication.

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout rprakashg.key -out rprakashg.pem

Creating Linux Virtual Machines

We are going to use the azure vm create command from a terminal window to create virtual machines. If you want help on command usage, add -h or --help at the end.

The parameters we are going to need to create a Linux virtual machine are listed below:

  • -n <virtual machine name>
  • -u <blob uri>
  • -z <virtual machine size>
  • -t <cert> specifies the ssh certificate for authentication
  • -l <location> specifies the location
  • -s <subscription id>
  • -p <password>
  • -u <username>

Identifying the Ubuntu image to use

azure vm image list | grep "20150123"

20150123 is the date the image was created. In the output from the command above you can see the full image name for the Ubuntu server, which you can use when creating the virtual machine.


Here is a sample command to create a Linux virtual machine using the SSH certificate we created above:

azure vm create "ubuntu-server1" "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB" -l "West US" -u "https://rprakashg.blob.core.windows.net" -z "Medium" -s "a8349281-7715-45c8-ac55-78787752ea7e" -t rprakashg.pem -e 22 -p <password> -u "rprakashg"
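Once the virtual machine is provisioned, you should be able to SSH in over port 22 (which -e 22 enabled) using the private key generated alongside the .pem earlier; something like:

ssh -i rprakashg.key rprakashg@ubuntu-server1.cloudapp.net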

Summarizing options for running Microsoft workloads on Google Cloud Platform

When it comes to Infrastructure as a Service (IaaS), both Amazon and Azure, IMO, dominate this space. Google also has an IaaS offering on their cloud platform, called Compute Engine. I have been using Compute Engine with Windows images to run workloads such as SharePoint, strictly for testing and learning purposes, so I can make better recommendations to my customers who are looking to move their Microsoft workloads to the cloud.

Currently, only Windows Server 2008 R2 Datacenter Edition is available as an operating system image for Compute Engine instances. If you have the Google Cloud Platform SDK installed, you can run the command below to see the list of Windows images:

gcloud compute images list --project windows-cloud --no-standard-images

Just like with Azure VMs, when you create a Compute Engine instance using a Windows image you have to pass a username and password. You can store the password in a password file and specify it at the time of creating the virtual machine.
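For example, a create command can pass the credentials as instance metadata; the sketch below uses hypothetical project and instance names (the metadata keys are the same ones used later in this blog):

gcloud compute instances create "win-test-1" --project "my-project" --zone "us-central1-a" --machine-type "n1-standard-2" --image "https://www.googleapis.com/compute/v1/projects/windows-cloud/global/images/windows-server-2008-r2-dc-v20150110" --metadata "gce-initial-windows-user=rprakashg" "gce-initial-windows-password=P@ssw0rd"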

Once the Compute Engine instance is up and running you can remote into it using an RDP client. From a Mac I use the Remote Desktop app, which is available free from the App Store. You do need to enable RDP before you can remote into the Compute Engine instance; if you haven’t seen my post on this, see enabling RDP.

If you have done a lot of work on Azure IaaS you will find that there is simply no PowerShell support available natively for GCE. I have been running operations on Compute Engine from PowerShell using the function below. Within my PowerShell scripts I build the arguments for the gcloud command line utility and call this function.

Function Run-CloudCommand(){
    param(
        [Parameter(Mandatory=$True)]
        [string] $Arguments
    )

    # Launch gcloud.cmd with the supplied arguments, capturing both output streams
    $pinfo = New-Object System.Diagnostics.ProcessStartInfo
    $pinfo.FileName = "gcloud.cmd"
    $pinfo.Arguments = $Arguments
    $pinfo.RedirectStandardError = $true
    $pinfo.RedirectStandardOutput = $true
    $pinfo.UseShellExecute = $false
    $pinfo.WorkingDirectory = "c:\program files\google\cloud sdk\google-cloud-sdk\bin"

    $p = New-Object System.Diagnostics.Process
    $p.StartInfo = $pinfo
    $p.Start() | Out-Null

    # Read the streams before waiting for exit; waiting first can deadlock
    # if the process fills a redirected output buffer
    $stdout = $p.StandardOutput.ReadToEnd()
    $stderr = $p.StandardError.ReadToEnd()
    $p.WaitForExit()

    if($p.ExitCode -ne 0)
    {
        Write-Error $stderr
    }
    else
    {
        return $stdout
    }
}
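Calling it then looks something like this (the project name is hypothetical):

$instances = Run-CloudCommand -Arguments "compute instances list --project my-project --format json"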

I would hope Google is working on a PowerShell SDK; this would be a nice open source initiative to work on if Google has no plans for one. It certainly wouldn’t be that hard to write a set of custom PowerShell cmdlets that interact with the REST APIs. It would be nice to know if folks are interested.

Remote PowerShell on Windows-based Compute Engine instances

No native remote PowerShell support is available like there is on Azure; however, you can get remote PowerShell enabled on a Compute Engine instance. If you haven’t seen my post on this, check it out here.

What can you run on Google Cloud Platform

You can run the server products listed below on Compute Engine.

  • MS Exchange Server
  • SharePoint Server
  • SQL Server Standard Edition
  • SQL Server Enterprise Edition
  • Lync Server
  • System Center Server
  • Dynamics CRM Server
  • Dynamics AX Server
  • MS Project Server
  • Visual Studio Deployment
  • Visual Studio Team Foundation Server
  • BizTalk Server
  • Forefront Identity Manager
  • Forefront Unified Access Gateway
  • Remote Desktop Services

For SQL Server, the number of licenses required by a Compute Engine instance is tied to the number of virtual cores; for example, if you used machine type n1-standard-2, which has two virtual cores, you would need two SQL Server Standard or Enterprise licenses.

The source for the above info is this article, which also contains a lot of information regarding running Microsoft software on Compute Engine and provides additional details on the process to go through.

I’ve seen folks in forums and online say that Windows-based Compute Engine instances start up slowly, but I personally have not found that to be the case. In fact, I found Windows instances start up and shut down faster than Azure VMs.

Machine types

The machine type determines the spec for your virtual machine instance, such as the amount of RAM, number of virtual cores, and persistent disk limits. Compute Engine has four classes of machine types.

  • Standard machine types
  • High CPU machine types
  • High memory machine types
  • Shared-core machine types

You can have up to 16 persistent disks with a total disk size of up to 10 TB attached to all machine types except shared-core machine types.

For more info on machine types, see this article.

Pricing

Compute Engine is priced cheaper than Azure VMs, but there are far fewer machine type options available. A couple of interesting things to know here: you are charged a minimum of 10 minutes, which means that if you start an instance and use it for 5 minutes, you still pay for 10 minutes. Apart from being priced competitively, Google also offers sustained use discounts. For more info on pricing, check out this article.

If you are interested in trying out Google Cloud Platform, head over to this link and sign up for the trial. You can get a $300 credit to use for 6 months.

If you run into any issues and need support, ask a question on Stack Overflow and tag it with google-compute-engine.

Cheers,

</Ram>

Creating a base SharePoint 2013 image for Google Compute Engine (GCE)

In case anyone is not familiar with Google Compute Engine, it is the Infrastructure as a Service (IaaS) capability available on Google Cloud Platform. Google now supports Windows Server images; unfortunately, only Windows Server 2008 R2 Datacenter Edition SP1 is available at the time of writing this article.

A bit of background for this: I’m testing deploying SharePoint on Google Compute Engine, and I needed a base image that has all the required software in it. Additionally, there are some steps you have to perform in GCE if you want to be able to assign static IP addresses to your virtual machines, and I did not want to keep doing the same thing each time I create a virtual machine. So this article covers how I built a base Windows image that contains SharePoint plus the configuration required so the virtual machine can have a static IP address assigned. You can follow the same method to create base Windows images that contain the software and configuration your application needs.

First, create a new virtual machine using the Windows Server image that is currently available.

gcloud compute --project "ce-playground" instances create "sp-base-image" --zone "us-central1-a" --machine-type "n1-standard-2" --network "sp-farm-net" --metadata "gce-initial-windows-user=rprakashg" "gce-initial-windows-password=P@ssw0rd" --maintenance-policy "MIGRATE" --scopes "https://www.googleapis.com/auth/devstorage.read_only" "https://www.googleapis.com/auth/logging.write" --image "https://www.googleapis.com/compute/v1/projects/windows-cloud/global/images/windows-server-2008-r2-dc-v20150110" --boot-disk-type "pd-standard" --boot-disk-device-name "sp-base-image" --can-ip-forward

If you look at the above command, for the machine type I used n1-standard-2 and for the boot disk I used pd-standard; you have the option to use SSD here if you like. Also, --can-ip-forward is used to enable IP routing for the virtual machine. Compute Engine does not support assigning a static network IP address to a virtual machine, but you can use a combination of routes and the instance’s --can-ip-forward ability to work around this.

Download the RDP file for the newly created virtual machine, remote into it, and perform the steps below. This is required since we want to assign a static network IP address; the reason we care is that the internal network IP addresses are managed by Compute Engine and can change when you start/stop instances.

Enable Windows Loopback Adapter

The Windows Loopback adapter will allow a static IP address to be assigned to the virtual machine. Follow the steps below to enable it.

  • Type Device Manager in the Start menu
  • In Device Manager, right-click on the machine name and select Add legacy hardware
  • Click Next on the welcome screen
  • Select Install the hardware that I manually select from a list and click Next
  • Select Network Adapter from the list
  • Select Microsoft from the manufacturers list and Microsoft Loopback Adapter from the network adapters list, and click Next

Add Windows firewall rule that allows ICMP traffic

To support pinging, we will add a firewall rule to allow ICMP traffic:

  • Type Windows Firewall with Advanced Security in the Start menu
  • Right-click on Inbound Rules and select New Rule
  • Select Custom for the rule type and click Next
  • Keep the default settings for Program and click Next
  • From the Protocol Type dropdown select ICMPv4 and click Next
  • Keep the default settings for Scope, Action, and Profile
  • Provide a name and description for the rule and click Finish (for the purpose of this post I used ICMP)

Enable IP Forwarding

  • Run regedit
  • Switch to HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > services > Tcpip > Parameters.
  • Set value for IPEnableRouter property to 1 (This enables IP routing for the instance)
  • Click OK

Windows Updates

Next, set the Windows Update settings to “download updates but let me choose whether to install them” (by default it’s set to automatically download and install; we don’t want that, as we want to control which updates get installed on the virtual machine). Then install Microsoft Update to get updates for other products such as Office, SharePoint, etc., and apply the outstanding updates to keep the virtual machine up to date on patches.

Install SharePoint on the Virtual Machine

Next we are going to install SharePoint with all the prerequisites on the virtual machine. This image is going to be a base image for SharePoint 2013 with Service Pack 1, which I downloaded from my MSDN subscription. Depending on your scenario you might choose a different version of SharePoint.

Double-click on the default application; this will bring up a splash screen. Click on Install software prerequisites.


This will bring up the prerequisite installer tool. Click Next, accept the terms of the agreement, and install all prerequisites.


The prerequisite installer tool will reboot once during the install; after the reboot, the installer will automatically continue and complete. Once the prerequisites are successfully installed, we can go ahead and install SharePoint Server by clicking on the Install SharePoint Server link in the splash screen. (Note: due to the reboot during the prerequisite install, we need to fire up the splash screen again.)

Since I’m using SharePoint 2013 with SP1 downloaded from MSDN, a valid product key must be entered before I can continue with the SharePoint Server install. You can also use the evaluation version of the SharePoint Server bits, as you probably don’t want to use a licensed version in the image. Since this image is going to be private and used strictly by me, I can safely use my MSDN product key. If you are going to share the image with others, you’ll want to use the evaluation version instead of giving away your key.

For Server Type keep Complete selected and click Install Now.


After installation is complete, uncheck Run the SharePoint Products Configuration Wizard now; since we are building a base image, we don’t want to run it.


I also ran a PowerShell script that does a couple of things (a minimal sketch follows the list):

  • Turn off unneeded services
  • Apply disable loopback check fix
  • Turn off CRL check
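Here is a rough sketch of those tweaks, assuming the standard registry locations (the service being disabled is just an example; pick the ones you don’t need):

# Turn off an unneeded service (example: print spooler)
Set-Service -Name "Spooler" -StartupType Disabled

# Apply the disable loopback check fix (avoids 401s when browsing
# SharePoint sites locally by host name)
New-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Lsa" -Name "DisableLoopbackCheck" -Value 1 -PropertyType DWord -Force

# Turn off CRL checking for the default profile so services don't stall
# trying to reach the certificate revocation list endpoints
New-PSDrive -Name HKU -PSProvider Registry -Root HKEY_USERS | Out-Null
Set-ItemProperty -Path "HKU:\.DEFAULT\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing" -Name "State" -Value 146944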

I created a folder named Scripts on the C drive and copied in some additional PowerShell scripts that automate the configuration of SharePoint. Once a virtual machine comes up, you can simply run the scripts.

At this point we are ready to turn this virtual machine into a base image.

Run a command prompt elevated, type gcesysprep, and hit Enter. (Note: don’t run the standard sysprep utility that we use to sysprep Windows images.) The gcesysprep utility will terminate the virtual machine instance; we can then delete the virtual machine without deleting the persistent disk by running the following command:

gcloud compute instances delete sp2013-image --keep-disks boot

Next we need to create a snapshot of the root persistent disk. A snapshot allows us to create a new persistent disk with the data from the snapshot; additionally, you can restore to a larger size than was originally used, or even to a different type of disk. Run the following command to create the snapshot of our sp2013-image disk that was sysprepped using the gcesysprep utility:

gcloud compute disks snapshot "sp2013-image" --project "ce-playground" --snapshot-names "sp2013-with-sp1-win2008r2sp1"

The above command will return a URI for the snapshot; you’ll want to write this down. The next thing we’ll need to do is create a new persistent disk using the snapshot we just created:

gcloud compute disks create "sp2013withsp1onwin2k8r2sp1" --source-snapshot "https://www.googleapis.com/compute/v1/projects/ce-playground/global/snapshots/sp2013-with-sp1-win2008r2sp1" --project "ce-playground" --zone "us-central1-a"

Next we will create an image using the new persistent disk created in the previous step:

gcloud compute images create "sharepoint-server-2013-sp1" --source-disk "sp2013withsp1onwin2k8r2sp1" --source-disk-zone "us-central1-a" --project "ce-playground"

My custom image is now available, and I can create new virtual machines using it.


You can run the following command to see all the metadata associated with your custom image:

gcloud compute images describe "sharepoint-server-2013-sp1" --project "ce-playground"

At this point we can create new virtual machines using this image. Once a virtual machine is created, you can simply RDP into it and run the PowerShell script to create a new SharePoint farm or join an existing one. This significantly cuts down the time required to get infrastructure up and running on Google Cloud Platform for a SharePoint workload. I plan to do some performance testing; my goal is to compare how Google Compute Engine stacks up against Azure IaaS, specifically when it comes to running large workloads like SharePoint in the cloud.

Cheers,

</Ram>

Enabling remote PowerShell for windows virtual machines in Google Compute Engine

As you might already know, you can now create Windows virtual machines in Compute Engine. With that, one of the common scenarios almost everyone will encounter is running remote PowerShell commands on the virtual machine, for things like automating the installation and configuration of additional software or components. In this post I’m going to cover the steps you can take to enable remote PowerShell.

On the virtual machine

Run Windows Update and select all options, including the optional .NET Framework 4.5. This will make sure the virtual machine is current on updates.

Once the virtual machine is back online, RDP into it and run the PowerShell command below to configure PowerShell remoting.

Enable-PSRemoting -Force

Virtual network configuration, done in the Google Cloud console or from the command line using the gcloud utility

We need to create some firewall rules on the virtual network to allow PowerShell remoting. The ports used by PowerShell remoting are TCP/5985 for HTTP and TCP/5986 for HTTPS. I named my rules as shown in the table below.

Rule Name Protocol:Port
allow-powershell-http tcp:5985
allow-powershell-https tcp:5986
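You can create these rules from the command line with gcloud as well; a sketch, assuming your network is named "default" (substitute your own network name):

gcloud compute firewall-rules create "allow-powershell-http" --allow tcp:5985 --network "default"
gcloud compute firewall-rules create "allow-powershell-https" --allow tcp:5986 --network "default"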

On the client machine

Since we are going to be talking to a virtual machine in a different domain, trying Enter-PSSession -ComputerName {VM public IP} is not going to work by default. I won’t go into too much detail about this as it’s already described here: http://blogs.technet.com/b/heyscriptingguy/archive/2013/11/29/remoting-week-non-domain-remoting.aspx

Run the PowerShell below to add the public IP of the VM running in Compute Engine to the trusted hosts list. (Keep in mind, if you stop the VM to avoid getting billed, make sure the IP has not changed the next time you bring it back up.)

Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value {replace with public ip of VM} -Concatenate -Force

At this point you should be all set to remote PowerShell into your virtual machines running on Compute Engine. Hopefully some day remote PowerShell support will be baked into Windows-based virtual machines in Compute Engine, so you won’t have to worry about these manual steps and can run your automation PowerShell scripts directly against a virtual machine as soon as it is provisioned and online.
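To test it, open a session with the credentials you created the VM with; a quick sketch:

$cred = Get-Credential
Enter-PSSession -ComputerName {replace with public ip of VM} -Credential $cred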

Google is giving away a $300 credit, good for 60 days, to anyone who wants to try out Google Cloud Platform. See more info here: https://cloud.google.com/free-trial/. Go sign up and check it out.

Hope that helps,

</Ram>

Enabling remote access for windows virtual machines in Google Compute Engine (GCE)

When you create a Windows virtual machine in Google Compute Engine (GCE) using a virtual network other than the default, and you try to remote desktop into the newly created virtual machine, you will get an error: “remote access to server is not enabled”.


The reason for this is that there are no firewall rules configured for TCP port 3389. If you go to the virtual network you used when the VM was created and look under firewall rules, you’ll see nothing. To enable RDP you simply need to create a new firewall rule for TCP port 3389, as shown below.

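A sketch of the equivalent gcloud command (substitute your own rule and network names):

gcloud compute firewall-rules create "allow-rdp" --allow tcp:3389 --network "<your-network>"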

After the firewall rule is created you should be able to RDP into the virtual machine.

It would be nice if this were done automatically when the VM is created; Azure does this already. For example, when a Linux VM is created, TCP port 22 automatically gets added to the endpoints to enable SSH into the VM.

Hope that helps

</Ram>

“ToJson()” extension method

I found myself having to serialize objects into a JSON string on so many occasions that it finally dawned on me that I could just have a ToJson() extension method, just like the ToString() method on object. Here is the code for it.

Assuming you have the Json.NET library included in your project, create a static class and name it JsonExtensionMethods. Add the following using statements:

using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

Add an extension method ToJson to this class. In this method we’ll use the JsonSerializerSettings class to specify that we are not interested in null properties; additionally, we are going to use the CamelCasePropertyNamesContractResolver so that the resulting JSON string follows the camelCase convention. See the code below.
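A minimal sketch of the method as described (null properties skipped, camelCase names):

public static class JsonExtensionMethods
{
    public static string ToJson(this object obj)
    {
        var settings = new JsonSerializerSettings
        {
            // drop properties whose value is null
            NullValueHandling = NullValueHandling.Ignore,
            // emit camelCase property names in the output
            ContractResolver = new CamelCasePropertyNamesContractResolver()
        };
        return JsonConvert.SerializeObject(obj, settings);
    }
}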

You’ll notice a ToJson() method now for all your classes

Hope that helps,

Cheers,

</Ram>

Getting up to speed with Docker

I have to admit I am a bit late to the party. I’ve been hearing a lot about Docker for a couple of months, mainly due to the announcements related to Docker support in Azure, but I never really took the time to understand and learn more about it until recently. I’m really glad I took some time to get started with Docker, as I feel it’s going to be a very disruptive technology platform and you are going to need to know about it at some point. This post is aimed at anyone completely new to Docker who wants to quickly get up to speed.

Overview – See this link on the Docker web site for a pretty good overview, and be sure to watch the intro video by Solomon Hykes where he explains the problem and how Docker solves it. https://www.docker.com/whatisdocker/

Understanding Docker – An excellent article that covers everything you need to understand about Docker: https://docs.docker.com/introduction/understanding-docker/

Docker Tutorial

The Docker team has put together an excellent tutorial here: https://www.docker.com/tryit/

Docker Windows Client

Microsoft has done some work already on getting the Docker CLI compiled on a Windows machine. Check out this article that walks through it: https://ahmetalpbalkan.com/blog/compiling-docker-cli-on-windows/

There is even an ASPNET vNext Docker image in the Docker Hub registry: https://registry.hub.docker.com/u/microsoft/aspnet/

Resources related to Docker support on Azure

I’ve compiled a list of links that discuss Docker support on Microsoft Azure.

Hopefully you get as excited about this platform as I am. In a very short time it has caught a lot of attention and grown insanely fast.

Hope this helps

</Ram>

Anonymous access + Azure AD authentication in Azure Websites or WebRole

By default, when you create an ASP.NET web application in Visual Studio and use Organizational Authentication, Visual Studio assumes that the entire site is authenticated and adds <deny users="?" /> to web.config, which causes a redirect to the Azure AD auth page every time you run the web application. If you have an anonymously accessible site plus secured, authenticated areas, there are some additional steps you’ll need to perform, and that’s what I will cover in this post.

Assuming you have created an ASP.NET web application and used organizational auth to hook into your Azure AD tenant for authentication and authorization, the first thing we’ll need to do is comment out the <deny users="?" /> element under the authorization element in web.config.

Add a new action method to your “AccountController.cs” file and name it “SignOn” (if you don’t like this method name you can change it to something else). We are going to add some code in this method to generate the Azure AD login URL. The code needed is below; just grab it from here.

[ChildActionOnly]
public ActionResult SignOn()
{
    ViewBag.LoginUrl = GetSignInUrl();
    return PartialView("_LoginPartial");
}

private String GetSignInUrl()
{
    WsFederationConfiguration config = FederatedAuthentication.FederationConfiguration.WsFederationConfiguration;
    string replyUrl = Url.Action("SignOnCallBack", "Account", routeValues: null, protocol: Request.Url.Scheme);
    var signInRequest = new SignInRequestMessage(new Uri(config.Issuer), IdentityConfig.Realm, replyUrl);
    signInRequest.SetParameter("wtrealm", IdentityConfig.Realm ?? config.Realm);
    return signInRequest.RequestUrl;
}

Next we’ll modify the “_LoginPartial” partial view, which is generated by default when you create the project from the template. In my example I’m using some Bootstrap classes to show a user dropdown menu containing a few menu items; you can customize this however you like for your scenario. Here is what I have for the site I’m working on:

<div class="btn-group">
    <a class="btn btn-primary" href="#"><i class="fa fa-user fa-fw"></i> @User.Identity.Name</a>
    <a class="btn btn-primary dropdown-toggle" data-toggle="dropdown" href="#">
        <span class="fa fa-caret-down"></span>
    </a>
    <ul class="dropdown-menu">
        @if (Request.IsAuthenticated)
        {
            <li><a href=@Url.Action("Profile", "Account")><i class="fa fa-pencil fa-fw"></i> Profile</a></li>
            <li class="divider"></li>
            <li><a href=@Url.Action("SignOut", "Account")><i class="fa fa-sign-out fa-fw"></i> Sign Out</a></li>
        }
        else
        {
            <li><a href="@ViewBag.LoginUrl"><i class="fa fa-sign-in fa-fw"></i> Login</a></li>
            <li class="divider"></li>
            <li><a href="#signup-modal" data-toggle="modal"><i class="fa fa-credit-card fa-fw"></i> Sign Up</a></li>
        }
    </ul>
</div>

Lastly, we’ll need to modify the layout to load the _LoginPartial view. Depending on your site branding and where you need to surface the login control, simply add the following Razor code:

@{Html.RenderAction("SignOn", "Account");}

That’s it.

Hope this helps

Thanks,

</Ram>

How I’m using dependency injection with IoC container in Azure WorkerRoles

In this post I will cover how I’m using dependency injection with an IoC container inside Azure worker roles. I’m assuming you are all familiar with DI and IoC containers, but just in case anyone is not: at a high level, they allow us to build a decoupled system that is testable.

I’m also using Microsoft’s CommonServiceLocator to get to my Ninject container. The reason for this is that it provides a nice abstraction layer in the event I decide to replace Ninject with some other IoC container; additionally, I don’t have to pass around the IKernel reference via a parameter in places where constructor injection is not possible.

Nuget Packages you will need to pull down

You’ll need to pull down the following NuGet packages; you can run these commands in the package manager console:

Install-Package Ninject
Install-Package CommonServiceLocator
Install-Package CommonServiceLocator.NinjectAdapter.Unofficial

You can easily tell from the names what the first two packages are; the last one, “CommonServiceLocator.NinjectAdapter”, gives you the Ninject adapter for the Common Service Locator.

When you create an Azure Worker Role project inside Visual Studio, by default you get a class called “WorkerRole” that inherits from the “RoleEntryPoint” class defined in “Microsoft.WindowsAzure.ServiceRuntime”. If you open this up you can see that the “WorkerRole” class has some code in overridden methods:

  • Run – any code you put here will run for the lifetime of the role instance
  • OnStart – role initialization code
  • OnStop – role cleanup code
  • RunAsync

You can also see that by default the Run method is wired to “RunAsync”, which is where you are supposed to add any code you want to run for the lifetime of the worker role; this could be things like polling a queue for messages that need to be processed.

Let’s see how we can get DI with the Ninject IoC container hooked into worker roles.

The first thing I did was create an abstract class called “NinjectableWorkerRole” that inherits from the “RoleEntryPoint” class, in the common framework assembly that I use in all Azure projects. NinjectableWorkerRole also implements the IDisposable interface so we can wire up the cleanup logic; in this example, that means unbinding all the dependencies we registered with the IoC container.

Then I moved all the default code that was in the “WorkerRole” class into it; additionally, I added the following abstract methods:

protected abstract void RegisterServices(IKernel kernel);

protected abstract void UnRegisterServices(IKernel kernel);

protected abstract void DoWork();

I also added a private IKernel _kernel variable to this class. The next thing we need to do is add some code to OnStart to initialize Ninject, as you can see in the code snippet below:

//create ninject IoC container
_kernel = new StandardKernel();

//Tell common service locator to use Ninject IoC container
ServiceLocator.SetLocatorProvider(() => new NinjectServiceLocator(_kernel));

//register services, override this in descendant classes and register dependencies with ninject
RegisterServices(_kernel);

Next we need to wire up the cleanup logic. To do that, I defined a method called “Dispose” that can be overridden in descendant classes. In this method we check whether the disposing flag is true, and if _kernel is not null we simply call Dispose on the kernel and set the reference to null. See the code snippet below.

protected virtual void Dispose(bool disposing)
{
    if (disposing)
    {
        if (_kernel != null)
        {
            _kernel.Dispose();
            _kernel = null;
        }
    }
}

Inside the Dispose method of the IDisposable interface, we simply call the above Dispose method passing “true” as the parameter and call GC.SuppressFinalize. You are probably familiar with this, as it’s the standard way of implementing IDisposable in C#.
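For completeness, the standard pattern looks like this:

public void Dispose()
{
    // standard IDisposable pattern: dispose managed state, then tell the
    // GC there is no need to run the finalizer
    Dispose(true);
    GC.SuppressFinalize(this);
}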

The last thing is to modify the RunAsync code to wire up the call to the abstract DoWork method.
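Here is a sketch of what that can look like, based on the loop in the default worker role template (the delay interval is arbitrary):

private async Task RunAsync(CancellationToken cancellationToken)
{
    while (!cancellationToken.IsCancellationRequested)
    {
        // descendant classes supply the actual work via the abstract method
        DoWork();
        await Task.Delay(1000, cancellationToken);
    }
}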

At this point the “NinjectableWorkerRole” class is ready. The only thing remaining is to modify the existing WorkerRole class in your worker role project to inherit from “NinjectableWorkerRole” instead of the default RoleEntryPoint class, and override the RegisterServices, UnRegisterServices, and DoWork methods. See the sample code below.

public class WorkerRole : NinjectableWorkerRole
{
    protected override void RegisterServices(IKernel kernel)
    {
        kernel.Bind<IHelloWorld>()
            .To<HelloWorld>();
        kernel.Bind<INames>()
            .To<MyName>();
    }

    protected override void UnRegisterServices(IKernel kernel)
    {
        kernel.Unbind<IHelloWorld>();
        kernel.Unbind<INames>();
    }

    protected override void DoWork()
    {
        var hw = ServiceLocator.Current.GetInstance<IHelloWorld>();
        Trace.TraceInformation(hw.Message);
    }
}

<Update 2/28/2015>Added code for NinjectableWorkerRole class</Update>

Hope this helps

</Ram>