
Control your Virtual Machine Spend with Azure Automation

In this post I will cover how I’m using Azure Automation to automate starting and stopping virtual machines, to save you from unnecessary costs. Like many, I have some Azure subscriptions that I manage using my Microsoft account and others using my organizational account. The reason I mention this is that it affects the way you authenticate from your Azure Automation runbooks. If you are completely new to Azure Automation I suggest you head over to this link before you read the rest of the post.

If you are new to Azure Automation you might also want to check this post by Keith Mayer; Keith does a good job walking through the various steps involved. If your Azure subscription is linked to a Microsoft account you’ll need to follow the steps in there to create and upload a management certificate to the Azure portal, as well as the steps to add a credential to Azure Automation. That article, however, doesn’t cover the scenario where you don’t want to use a management certificate to authenticate with your Azure subscription but instead want to use Azure AD and organizational authentication. To use organizational authentication you have to specify “Windows PowerShell Credentials” for the credential type instead of a certificate and enter your organizational account user name and password. One important point to note here is that you cannot have multi-factor authentication enabled for the user account you will be using, as “Add-AzureAccount” with the -Credential option just doesn’t seem to work when MFA is enabled.

You can also create a new PowerShell credential using the PowerShell below:

$user = "{replace with organization account}"
$pw = Read-Host "Enter password" -AsSecureString
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $user, $pw
New-AzureAutomationCredential -AutomationAccountName "{replace}" -Name "{replace}" -Value $cred

At this point you should have set up a new Azure Automation account and added a credential setting to assets. The credential setting can be a certificate or Windows PowerShell credentials depending on your specific scenario. The next thing we’ll need to do is add another setting named “settingsJSON” to assets. To perform this step, select the automation account in the management portal, click on the “assets” option, and then “add setting” as shown in the screenshot below.


Next you will see a dialog where you’ll need to specify the type of setting to be added; select “add variable”.


In the define variable screen select “String” for the variable type and enter “settingsJSON” for the variable name.


In the define variable value screen we’ll need to paste the JSON shown below.


Here is a sample JSON template to paste as the variable value. Before you use it you will need to replace the subscriptionID, subscriptionName, credentialName, and the list of virtual machines you want to automate starting and stopping.

{
    "subscriptionID": "{replace}",
    "subscriptionName": "{replace}",
    "credentialName": "{replace with name used when credential was setup}",
    "virtualMachines": [
        { "name": "vm1", "serviceName": "vm2" },
        { "name": "vm2", "serviceName": "vm2" }
    ]
}
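If you’d rather script this than click through the portal, the same variable can likely be created with the Azure PowerShell module’s Automation cmdlets; a sketch, assuming your settings JSON is saved in a local file (the file path and account name are placeholders):

$json = Get-Content -Path ".\settings.json" -Raw
New-AzureAutomationVariable -AutomationAccountName "{replace}" -Name "settingsJSON" -Value $json -Encrypted $false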

Next we need to create a new Azure Automation runbook. Click New, select Runbook and then Quick Create, enter “startstopvms” for the runbook name, select the automation account you want to use, and click create.


Select the newly created runbook “startstopvms”, click on “Author”, and paste the script below inside the workflow.

    param (
        [String] $action
    )

    $json = Get-AutomationVariable -Name "settingsJSON"
    Write-Output "json string : $json"
    $settings = ConvertFrom-Json $json
    Write-Output "Retrieving Credential : $($settings.credentialName)"
    $psCred = Get-AutomationPSCredential -Name $settings.credentialName
    if($psCred -eq $null) {
        Write-Output "No stored credentials, trying to load certificate for authenticating with subscription"
        $certificate = Get-AutomationCertificate -Name $settings.credentialName
        Write-Output "Setting azure subscription : $($settings.subscriptionName)"
        Set-AzureSubscription -SubscriptionName $settings.subscriptionName -SubscriptionId $settings.subscriptionID -Certificate $certificate
    } else {
        Write-Output "organizational authentication, using stored powershell credentials with Add-AzureAccount"
        Add-AzureAccount -Credential $psCred
    }
    Select-AzureSubscription $settings.subscriptionName
    foreach($virtualMachine in $settings.virtualMachines) {
        if($action.ToUpper() -eq "START") {
            Write-Output "Starting virtual machine : $($virtualMachine.name)"
            Start-AzureVM -Name $virtualMachine.name -ServiceName $virtualMachine.serviceName -ErrorAction SilentlyContinue
        } else {
            Write-Output "Stopping virtual machine : $($virtualMachine.name)"
            Stop-AzureVM -Name $virtualMachine.name -ServiceName $virtualMachine.serviceName -Force -ErrorAction SilentlyContinue
        }
    }

At this point you should be able to test the runbook by clicking on the test option to make sure things are working correctly. Passing “start” for the action parameter should start all the virtual machines specified in the JSON settings; similarly, to stop the virtual machines you can simply pass “stop” for the action parameter value.
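You can also kick the runbook off from PowerShell instead of the portal; something like this should work, assuming the automation account name from your setup:

Start-AzureAutomationRunbook -AutomationAccountName "{replace}" -Name "startstopvms" -Parameters @{ action = "start" }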

Next we need to create a schedule to start the virtual machines.

Select the “startstopvms” runbook we created earlier, click on the “schedule” option, and then “link to a new schedule”.


In the “Add Schedule” screen I’ll name the new schedule “start-virtualmachines” and click next


In the configure schedule screen, since I want this to run daily, I’ll select the “daily” option and specify a start time. In my case I want the virtual machines to come up at 9 AM every morning.


Next we specify the runbook parameter value, in this case the “action” parameter; since this is a schedule for starting virtual machines, we’ll specify “start”.


Clicking the “check” button will create the schedule. Similarly, you can create a schedule to stop the virtual machines: repeat the steps we performed earlier and specify “stop” for the action parameter value.
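The schedules can be scripted as well; a sketch using the Azure module’s Automation cmdlets (cmdlet names and parameters as I recall them from the service management module, so double-check with Get-Help before relying on them):

New-AzureAutomationSchedule -AutomationAccountName "{replace}" -Name "start-virtualmachines" -StartTime "9:00 AM" -DayInterval 1
Register-AzureAutomationScheduledRunbook -AutomationAccountName "{replace}" -Name "startstopvms" -ScheduleName "start-virtualmachines" -Parameters @{ action = "start" }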

That’s it. You should now be able to control unnecessary expenses resulting from leaving your virtual machines on, using the Azure Automation feature. Obviously Azure Automation has many other scenarios and use cases.

Hope this helps.


My Initial thoughts on Azure Resource Manager tooling in Visual Studio

I don’t know about you, but to me Azure Resource Manager is one of the coolest features; it takes all the complexity of provisioning resources within Azure away from us. If you are new to Azure Resource Manager, I highly suggest checking out the Build and Ignite sessions on this topic. In this post I want to cover some initial thoughts on the Azure Resource Manager (ARM) tooling in Visual Studio; I will continue to add to this as I find new stuff.

Things I like

  • A project template to create an Azure Resource Group based on templates
  • Full IntelliSense when editing ARM templates
  • Easy deployment of a resource group template from Visual Studio as well as from PowerShell

Things to know

When you first create a Resource Group project, the select Azure template dialog makes a bunch of HTTP calls to get template metadata; if you don’t have an internet connection, or your connection dropped, you are going to see the error below. It does, however, cache the metadata after it is loaded from the server for the first time, so if your connection drops later, or you are mobile without internet but want to work on the template, you should have no problems.


My Wish List

Today when you create a new resource group project you start off with a blank template. It would be nice if the tooling provided some sort of hook into the GitHub repository that contains templates created by the community, so you are not starting from scratch. If you are not familiar with the quick start templates, take a look at this link. The templates you find there are indexed from this GitHub repository. You can fork this repository and contribute to the community.

If you started off creating your template inside of Visual Studio and now want to publish it to the community templates GitHub repo, you still have to do a lot of additional work, such as renaming the template JSON file and the parameter JSON file, creating the metadata file, etc., before you can commit and send a pull request. I would like to see a way to publish a template directly into the community template GitHub repo from Visual Studio; that would be tremendously helpful and would save template authors a lot of prep work.

Biggest Gap I see

Let’s say the design of your solution requires you to build a bunch of Logic App connectors (API apps) and a Logic App (business process), and you want to create an ARM template to provision the connectors and the Logic App together. I’m finding that extremely difficult and painful with the current tooling. You have to deploy the connectors individually, then switch to Azure to design the Logic App, then test it. Once I know the Logic App is working correctly I have to take the JSON, come back to Visual Studio, and wire it up to the ARM template. It’s a really painful process, IMO. I’m sure this will get better as Azure App Service gets out of preview.

I’d love to hear what you think of the ARM tooling in Visual Studio. Drop a comment if you have interesting things to share.



Summarizing options for running Microsoft workloads on Google Cloud Platform

When it comes to Infrastructure as a Service (IaaS), both Amazon and Azure, IMO, dominate this space. Google also has an IaaS offering on their cloud platform, called Compute Engine. I have been using Compute Engine with Windows images to run workloads such as SharePoint, strictly for testing and learning purposes, so I can make better recommendations to my customers who are looking to move their Microsoft workloads to the cloud.

Currently, Windows Server 2008 R2 Datacenter Edition is the only Windows operating system image available for Compute Engine instances. If you have the Google Cloud Platform SDK installed you can run the command below to see the list of Windows images:

gcloud compute images list --project windows-cloud --no-standard-images

Just like an Azure VM, when you create a Compute Engine instance using a Windows image you have to pass a username and password. You can have passwords stored in a password file and specify it at the time of creating the virtual machine.

Once the Compute Engine instance is up and running you can remote into it using an RDP client. From a Mac I use the Remote Desktop app, which is available free from the App Store. You do need to enable RDP before you can remote into the Compute Engine instance; if you haven’t seen my post on this, see enabling RDP.

If you have done a lot of work on Azure IaaS you will find that there is simply no native PowerShell support within GCE. I have been running operations on Compute Engine from PowerShell using the function below: within my PowerShell scripts I build the arguments for the gcloud command-line utility and call this function.

Function Run-CloudCommand() {
    param (
        [string] $Arguments
    )

    # launch gcloud.cmd with the supplied arguments and capture its output
    $pinfo = New-Object System.Diagnostics.ProcessStartInfo
    $pinfo.FileName = "gcloud.cmd"
    $pinfo.Arguments = $Arguments
    $pinfo.RedirectStandardError = $true
    $pinfo.RedirectStandardOutput = $true
    $pinfo.UseShellExecute = $false
    $pinfo.WorkingDirectory = "c:\program files\google\cloud sdk\google-cloud-sdk\bin"

    $p = New-Object System.Diagnostics.Process
    $p.StartInfo = $pinfo
    $p.Start() | Out-Null

    $stdout = $p.StandardOutput.ReadToEnd()
    $stderr = $p.StandardError.ReadToEnd()
    $p.WaitForExit()   # wait for the process to finish before checking the exit code

    if($p.ExitCode -ne 0) {
        Write-Error $stderr
    }

    return $stdout
}
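Calling it is then just a matter of building the argument string; for example, to pull the instance list back as objects (the project name is a placeholder):

$output = Run-CloudCommand -Arguments "compute instances list --project my-project --format json"
$instances = ConvertFrom-Json $output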



I would hope Google is working on a PowerShell SDK; this would also be a nice open source initiative to work on if Google has no plans. It certainly wouldn’t be that hard to write a set of custom PowerShell cmdlets that interact with the REST APIs. It would certainly be nice to know if folks are interested.

Remote PowerShell on Windows-based Compute Engine instances

No native remote PowerShell support is available like there is in Azure; however, you can get remote PowerShell enabled on a Compute Engine instance. If you haven’t seen my post on this, check it out here.

What can you run on Google Cloud Platform

You can run the server products listed below on Compute Engine.

  • MS Exchange Server

  • SharePoint Server
  • SQL Server Standard Edition
  • SQL Server Enterprise Edition
  • Lync Server
  • System Center Server
  • Dynamics CRM Server
  • Dynamics AX Server
  • MS Project Server
  • Visual Studio Deployment
  • Visual Studio Team Foundation Server
  • BizTalk Server
  • Forefront Identity Manager
  • Forefront Unified Access Gateway
  • Remote Desktop Services

For SQL Server, the number of licenses required by a Compute Engine instance is tied to the number of virtual cores; for example, if you used machine type n1-standard-2, which has two virtual cores, you would need two SQL Server Standard or Enterprise licenses.

The source for the above info is this article. The article also contains a lot of information regarding running Microsoft software on Compute Engine and provides additional details on the process to go through.

I’ve seen folks in forums and online talk about Windows-based Compute Engine instances starting up slowly, but I personally have not felt that way. In fact, I found Windows instances start up and shut down faster than Azure VMs.

Machine types

The machine type determines the specs for your virtual machine instance, such as the amount of RAM, number of virtual cores, and persistent disk limits. Compute Engine has four classes of machine types, listed below; you can also list what’s available from the SDK, as shown after the list.

  • Standard machine types

  • High CPU machine types
  • High memory machine types
  • Shared-core machine types
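To see the specific machine types and their specs, run:

gcloud compute machine-types list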

You can have up to 16 persistent disks with a total disk size of up to 10 TB attached to all machine types except shared-core machine types.

For more info on machine types, see this article.


Compute Engine is priced cheaper compared to Azure VMs, but far fewer machine type options are available. A couple of interesting things to know here: you are charged a minimum of 10 minutes, which means that if you start an instance and use it for 5 minutes you are still paying for 10 minutes. Apart from being priced competitively, Google also offers sustained use discounts. For more info on pricing check out this article.

If you are interested in trying out Google Cloud Platform, head over to this link and sign up for a trial. You can get a $300 credit to use for 6 months.

If you run into any issues and need support, ask a question on Stack Overflow and tag it with google-compute-engine.



Creating a base SharePoint 2013 image for Google Compute Engine (GCE)

In case anyone is not familiar with Google Compute Engine, it is the Infrastructure as a Service (IaaS) capability available on Google Cloud Platform. Google now supports Windows Server images; unfortunately, only Windows Server 2008 R2 Datacenter Edition SP1 is available at the time of writing this article.

A bit of background for this: I’m testing deploying SharePoint on Google Compute Engine and I needed a base image that has all the required software in it. Additionally, there are some steps you have to perform in GCE if you want to be able to assign static IP addresses to your virtual machines, and I did not want to keep doing the same thing each time I create a virtual machine. So this article covers how I built a base Windows image that contains SharePoint plus the configuration required so the virtual machine can have a static IP address assigned. You can follow the same method to create base Windows images that contain the software and configuration your application needs.

First, create a new virtual machine using the Windows Server image that is currently available:

gcloud compute --project "ce-playground" instances create "sp-base-image" --zone "us-central1-a" --machine-type "n1-standard-2" --network "sp-farm-net" --metadata "gce-initial-windows-user=rprakashg" "gce-initial-windows-password=P@ssw0rd" --maintenance-policy "MIGRATE" --scopes "" "" --image "" --boot-disk-type "pd-standard" --boot-disk-device-name "sp-base-image" --can-ip-forward

If you look at the above command, for the machine type I used n1-standard-2 and for the boot disk I used pd-standard; you have the option to use SSD here if you like. Also, --can-ip-forward is used to enable IP routing for the virtual machine. Compute Engine does not support assigning a static network IP address to a virtual machine; you can use a combination of routes and the instance’s --can-ip-forward ability to work around this.

Download the RDP file for the newly created virtual machine, remote into it, and perform the steps below. This is required since we want to assign a static network IP address. The reason we care about this is that internal network IP addresses are managed by Compute Engine and can change when you start/stop instances.

Enable Windows Loopback Adapter

The Windows loopback adapter will allow a static IP address to be assigned to the virtual machine. Follow the steps below to enable the loopback adapter.

  • Type Device Manager in Start menu
  • From the device manager right click on the virtual machine name and select add legacy hardware
  • Click Next on the welcome screen.
  • Select Install the hardware that I manually select from a list and click Next.
  • Select Network Adapter from the list
  • Select Microsoft from the manufacturers list and Microsoft Loopback Adapter from the network adapters list and click Next

Add Windows firewall rule that allows ICMP traffic

To support pinging we will add a firewall rule to allow ICMP traffic

  • Type Windows Firewall with Advanced Security in the Start menu
  • Right click on Inbound Rules and select New Rule
  • Select Custom for rule type and click next
  • Keep default settings for Program and click next
  • From the Protocol Type dropdown select ICMPV4 and click next
  • Keep default settings for Scope, Action, Profile
  • Provide a Name and Description for rule and click finish (For the purpose of this post I used ICMP)

Enable IP Forwarding

  • Run regedit
  • Switch to HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > services > Tcpip > Parameters.
  • Set value for IPEnableRouter property to 1 (This enables IP routing for the instance)
  • Click OK

Windows Updates

Next we will set the Windows Update settings to “download updates but let me choose whether to install them” (by default it’s set to automatically download and install; we don’t want that, as we want to control what updates get installed on the virtual machine). Next we will install Microsoft Update to get updates for other products such as Office, SharePoint, etc. Apply any outstanding updates to keep the virtual machine up to date on patches.

Install SharePoint on the Virtual Machine

Next we are going to install SharePoint with all the prerequisites on the virtual machine. This image is going to be a base image for SharePoint 2013 with Service Pack 1, which I’ve downloaded from my MSDN subscription. Depending on your scenario you might choose to use a different version of SharePoint.

Double-click the default application; this will bring up a splash screen. Click on Install Software Prerequisites.


This will bring up the prerequisite installer tool. Click Next, accept the terms of agreement, and install all prerequisites.


The prerequisite installer tool will reboot once during the install; after the reboot the installer will automatically continue and complete. Once the prerequisites are successfully installed, we can go ahead and install SharePoint Server by clicking on the Install SharePoint Server link in the splash screen. (Note: due to the reboot during the prerequisite install, we need to fire up the splash screen again.)

Since I’m using SharePoint 2013 with SP1 downloaded from MSDN, a valid product key must be entered before I can continue with the SharePoint Server install. You can also use the evaluation version of the SharePoint Server bits, as you probably don’t want to use a licensed version in the image. Since this image is going to be private and used strictly by me, I can safely use my MSDN product key. If you are going to share the image with others, you will want to use the evaluation version instead of giving away your key.

For Server Type keep Complete selected and click Install Now.


After installation is complete, uncheck Run the SharePoint Products Configuration Wizard now. Since we are building a base image we don’t want to run it yet.


I also ran a PowerShell script that does a couple of things:

  • Turn off unneeded services
  • Apply disable loopback check fix
  • Turn off CRL check

I created a folder named Scripts under the C: drive and copied some additional PowerShell scripts that automate the configuration of SharePoint. Once a virtual machine built from this image comes up, you can simply run the scripts.

At this point we are ready to turn this virtual machine into a base image.

Run an elevated command prompt, type gcesysprep, and hit Enter. (Note: don’t run the standard sysprep utility that we normally use to sysprep Windows images.) The gcesysprep utility will terminate the virtual machine instance; we can then delete the virtual machine without deleting the persistent disk by running the following command:

gcloud compute instances delete sp-base-image --keep-disks boot

Next we need to create a snapshot of the root persistent disk. A snapshot allows us to create a new persistent disk with the data from the snapshot; additionally, you can restore to a larger size, or a different type of disk, than was originally used. Run the following command to create a snapshot of the disk that was sysprepped using the gcesysprep utility:

gcloud compute disks snapshot "sp-base-image" --project "ce-playground" --snapshot-names "sp2013-with-sp1-win2008r2sp1"

The above command will return a URI for the snapshot; you’ll want to write this down. The next thing we’ll need to do is create a new persistent disk from the snapshot we just created:

gcloud compute disks create "sp2013withsp1onwin2k8r2sp1" --source-snapshot "sp2013-with-sp1-win2008r2sp1" --project "ce-playground" --zone "us-central1-a"

Next we will create an image using the new persistent disk that was created in the previous step:

gcloud compute images create "sharepoint-server-2013-sp1" --source-disk "sp2013withsp1onwin2k8r2sp1" --source-disk-zone "us-central1-a" --project "ce-playground"

You can see from the screenshot below that my custom image is now available to me and I can create new virtual machines using it.


You can run the following command to see all the metadata associated with your custom image:

gcloud compute images describe "sharepoint-server-2013-sp1" --project "ce-playground"

At this point we can create new virtual machines using this image. Once a virtual machine is created you can simply RDP into it and run the PowerShell script to create a new SharePoint farm or join an existing one. This significantly cuts down the time required to get the infrastructure up and running in Google Cloud Platform to run a SharePoint workload. I plan to do some performance testing; my goal is to compare how Google Compute Engine stacks up against Azure IaaS, specifically when it comes to running large workloads like SharePoint in the cloud.



Enabling remote PowerShell for windows virtual machines in Google Compute Engine

As you might already know, you can now create Windows virtual machines in Compute Engine. With that, one of the common scenarios almost everyone will encounter is running remote PowerShell commands on the virtual machine; this could be things like automating the install and configuration of additional software or components. In this post I’m going to cover the steps you can take to enable remote PowerShell.

On the virtual machine

Run Windows Update and select all options, including the optional .NET Framework 4.5. This will make sure the virtual machine is current on updates.

Once the virtual machine is back online, RDP into it and run the PowerShell command below to configure PowerShell remoting:

Enable-PSRemoting -Force

Virtual network configuration, to be done in the Google Cloud console or on the command line using the gcloud utility

We need to create some firewall rules on the virtual network to allow PowerShell remoting. The ports used by PowerShell remoting are TCP/5985 for HTTP and TCP/5986 for HTTPS. I named my rules as shown in the table below.

Rule Name                 Protocol:Port
allow-powershell-http     tcp:5985
allow-powershell-https    tcp:5986
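If you’d rather script the rules than use the console, something like this should work with the gcloud utility (replace the network name with the one your VM uses):

gcloud compute firewall-rules create "allow-powershell-http" --allow tcp:5985 --network "{your network}"
gcloud compute firewall-rules create "allow-powershell-https" --allow tcp:5986 --network "{your network}"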

On the client machine

Since we are going to be talking to a virtual machine that is in a different domain, if you try Enter-PSSession -ComputerName {VM public IP}, by default this is not going to work. I won’t go into too much detail about this as it’s already described here.

Run the PowerShell below to add the public IP of the VM running in Compute Engine to the trusted hosts list. (Keep in mind that if you stop the VM to avoid getting billed, make sure the IP has not changed the next time you bring it back up.)

Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value {replace with public ip of VM} -Concatenate -Force
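Once the IP is trusted you should be able to open the remote session; the credential here is the Windows account you specified when the instance was created:

$cred = Get-Credential
Enter-PSSession -ComputerName {replace with public ip of VM} -Credential $cred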

At this point you should be all set to remote PowerShell into your virtual machines running on Compute Engine. Hopefully some day remote PowerShell support will be baked into Windows-based virtual machines in Compute Engine, so you won’t have to worry about these manual steps and can run your automation scripts directly against a virtual machine as soon as it is provisioned and online.

Google is giving away a $300 credit, usable for 60 days, to anyone who wants to try out Google Cloud Platform. See more info here. Go sign up and check it out.

Hope that helps,


Enabling remote access for windows virtual machines in Google Compute Engine (GCE)

When you create a Windows virtual machine in Google Compute Engine (GCE) using a virtual network other than the default, and you try to remote desktop into the newly created virtual machine, you will get an error: “remote access to server is not enabled”.


The reason for this is that there are no firewall rules configured for TCP port 3389. If you go to the virtual network you used when the VM was created and look under firewall rules, you’ll see nothing. To enable RDP you simply need to create a new firewall rule for TCP port 3389.

See below
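If you prefer the command line over the console, a rule like this should do the same thing (replace the network name with the one your VM uses):

gcloud compute firewall-rules create "allow-rdp" --allow tcp:3389 --network "{your network}"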


After the firewall rule is created you should be able to RDP into the Virtual Machine

It would be nice if this were done automatically when the VM is created; Azure does this already. For example, when a Linux VM is created, TCP port 22 automatically gets added to the endpoints to enable SSH into the VM.

Hope that helps


“ToJson()” extension method

I found myself having to serialize objects into JSON strings on so many occasions that it finally dawned on me that I could just have a ToJson() extension method, just like the ToString() method on object. Here is the code for it.

Assuming you have the Json.NET (Newtonsoft.Json) library included in your project, create a static class and name it JsonExtensionMethods. Add the following using statements:

using Newtonsoft.Json;

using Newtonsoft.Json.Serialization;

Add an extension method ToJson to this class. In this method we’ll use the JsonSerializerSettings class to specify that we are not interested in null properties; additionally, we are going to use the “CamelCasePropertyNamesContractResolver” so that the resulting JSON string follows the camelCase convention. See the code below.
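A minimal implementation along those lines, assuming the two using statements above:

public static class JsonExtensionMethods
{
    public static string ToJson(this object obj)
    {
        var settings = new JsonSerializerSettings
        {
            // skip null properties in the serialized output
            NullValueHandling = NullValueHandling.Ignore,
            // emit camelCase property names
            ContractResolver = new CamelCasePropertyNamesContractResolver()
        };

        return JsonConvert.SerializeObject(obj, settings);
    }
}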

You’ll now see a ToJson() method on all your objects.

Hope that helps,



Anonymous access + Azure AD authentication in Azure Websites or WebRole

By default, when you create an ASP.NET web application in Visual Studio and use Organizational Authentication, Visual Studio assumes that the entire site is an authenticated site and adds a <deny users="?" /> element to web.config, which causes a redirect to the Azure AD auth page every time you run the web application. If you have an anonymously accessible site plus secured, authenticated areas, there are some additional steps you’ll need to perform, and that’s what I will cover in this post.

I’m assuming you have created an ASP.NET web application and used organizational auth to hook into your Azure AD tenant for authentication and authorization. The first thing we’ll need to do is comment out the <deny users="?"> element under the authorization element in web.config.

Add a new action method to your “AccountController.cs” file and name it “SignOn” (if you don’t like this method name you can change it to something you prefer). We are going to add some code in this method to generate the Azure AD login URL. The code that is needed is pasted below; just grab it from here.


public ActionResult SignOn()
{
    ViewBag.LoginUrl = GetSignInUrl();
    return PartialView("_LoginPartial");
}

private String GetSignInUrl()
{
    WsFederationConfiguration config = FederatedAuthentication.FederationConfiguration.WsFederationConfiguration;
    string replyUrl = Url.Action("SignOnCallBack", "Account", routeValues: null, protocol: Request.Url.Scheme);
    var signInRequest = new SignInRequestMessage(new Uri(config.Issuer), IdentityConfig.Realm, replyUrl);
    signInRequest.SetParameter("wtrealm", IdentityConfig.Realm ?? config.Realm);
    return signInRequest.RequestUrl;
}


Next we’ll modify the “_LoginPartial” partial view, which is generated by default when you create the project from the project template. In my example I’m using some Bootstrap classes to show a user dropdown menu that contains a few menu items; you can completely customize this for your scenario. Here is what I have for the site I’m working on:

<div class="btn-group">
    <a class="btn btn-primary" href="#"><i class="fa fa-user fa-fw"></i> @User.Identity.Name</a>
    <a class="btn btn-primary dropdown-toggle" data-toggle="dropdown" href="#">
        <span class="fa fa-caret-down"></span>
    </a>
    <ul class="dropdown-menu">
        @if (Request.IsAuthenticated)
        {
            <li><a href=@Url.Action("Profile", "Account")><i class="fa fa-pencil fa-fw"></i> Profile</a></li>
            <li class="divider"></li>
            <li><a href=@Url.Action("SignOut", "Account")><i class="fa fa-sign-out fa-fw"></i> Sign Out</a></li>
        }
        else
        {
            <li><a href="@ViewBag.LoginUrl"><i class="fa fa-sign-in fa-fw"></i> Login</a></li>
            <li class="divider"></li>
            <li><a href="#signup-modal" data-toggle="modal"><i class="fa fa-credit-card fa-fw"></i> Sign Up</a></li>
        }
    </ul>
</div>




Lastly, we’ll need to modify the layout to load the _LoginPartial view. Depending on your site branding and where you need to surface the login control, simply add the following Razor code:

@{Html.RenderAction("SignOn", "Account");}

That’s it.

Hope this helps



How I’m using dependency injection with IoC container in Azure WorkerRoles

In this post I will cover how I’m using dependency injection with an IoC container inside of Azure worker roles. I’m assuming you are all familiar with DI and IoC containers, but just in case anyone is not: at a high level, they allow us to build a decoupled system that is testable.

I’m also using Microsoft’s CommonServiceLocator to get to my Ninject container. The reason for this is that it provides a nice abstraction layer in the event I decide to replace Ninject with some other IoC container; additionally, I don’t have to pass the IKernel reference around via a parameter in places where constructor injection is not possible.

NuGet packages you will need to pull down

You’ll need to pull down the following NuGet packages; you can run the following commands in the Package Manager Console:

Install-Package Ninject

Install-Package CommonServiceLocator

Install-Package CommonServiceLocator.NinjectAdapter.Unofficial

You can easily tell from the names what the first two packages are; the last one gives you the Ninject adapter for the Common Service Locator.

When you create an Azure Worker Role project inside of Visual Studio, by default you get a class called “WorkerRole” that inherits from the “RoleEntryPoint” class defined in “Microsoft.WindowsAzure.ServiceRuntime”. If you open this up you can see that the “WorkerRole” class has some code in overridden methods:

  • Run – Any code you put here will run for the life time of role instance
  • OnStart – Role initialization code
  • OnStop – Role cleanup code
  • RunAsync – Where your long-running code goes (Run is wired to it by default)

You can also see that by default the Run method is wired to “RunAsync”, which is where you are supposed to add any code you want to run for the lifetime of the worker role. This could be things like polling a queue for messages that need to be processed, etc.

Let’s see how we can get DI with the Ninject IoC container hooked into worker roles.

The first thing I did was create an abstract class called “NinjectableWorkerRole” that inherits from the “RoleEntryPoint” class, in the common framework assembly that I use in all Azure projects. NinjectableWorkerRole also implements the IDisposable interface; this is so we can wire up the cleanup logic, which in this example is unbinding all the dependencies that we registered with the IoC container.

Then I moved all the default code that was in the “WorkerRole” class into it; additionally, I added the following abstract methods:

protected abstract void RegisterServices(IKernel kernel);

protected abstract void UnRegisterServices(IKernel kernel);

protected abstract void DoWork();

I also added a private IKernel variable named _kernel to this class. The next thing we need to do is add some code to OnStart to initialize Ninject, as you can see in the code snippet below.

//create ninject IoC container
_kernel = new StandardKernel();

//Tell common service locator to use Ninject IoC container
ServiceLocator.SetLocatorProvider(() => new NinjectServiceLocator(_kernel));

//register services, override this in descendant classes and register dependencies with ninject
RegisterServices(_kernel);


Next we need to wire up the cleanup logic. To do that I defined a method called “Dispose” that can be overridden in descendant classes. In this method we check whether the disposing flag is true, and if _kernel is not null we call the kernel’s Dispose method and set the reference to null. See the code snippet below.

protected virtual void Dispose(bool disposing)
{
    if (disposing)
    {
        if (_kernel != null)
        {
            _kernel.Dispose();
            _kernel = null;
        }
    }
}




Inside the Dispose method of the IDisposable interface we simply call the above Dispose method, passing “true” as the parameter, and call the SuppressFinalize method on GC. You are probably familiar with this, as it’s the standard way of implementing IDisposable in C#.
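That pattern, for reference:

public void Dispose()
{
    // dispose managed resources and tell the GC not to run the finalizer
    Dispose(true);
    GC.SuppressFinalize(this);
}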

The last thing is to modify the RunAsync code to wire up the call to the abstract DoWork method.
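A sketch of what that might look like, assuming the default worker role template’s polling loop (the delay interval is arbitrary):

private async Task RunAsync(CancellationToken cancellationToken)
{
    while (!cancellationToken.IsCancellationRequested)
    {
        // delegate the actual work to the descendant class
        DoWork();

        // throttle the loop; pick an interval that suits your workload
        await Task.Delay(1000, cancellationToken);
    }
}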

At this point the “NinjectableWorkerRole” class is ready. The only thing remaining is modifying the existing WorkerRole class in your worker role project to inherit from “NinjectableWorkerRole” instead of the default out-of-the-box RoleEntryPoint class, and to override the RegisterServices, UnRegisterServices, and DoWork methods. See the sample code below.

public class WorkerRole : NinjectableWorkerRole
{
    protected override void RegisterServices(IKernel kernel)
    {
        // register your dependencies here, e.g. kernel.Bind<IHelloWorld>().To<HelloWorld>();
    }

    protected override void UnRegisterServices(IKernel kernel)
    {
        // unbind your dependencies here, e.g. kernel.Unbind<IHelloWorld>();
    }

    protected override void DoWork()
    {
        var hw = ServiceLocator.Current.GetInstance<IHelloWorld>();
        // use the resolved dependency here
    }
}




<Update 2/28/2015>Added code for NinjectableWorkerRole class</Update>

Hope this helps


Enabling Web API on an existing MVC project

If you are in a situation where you started off with an MVC project and realized later that you need to add some APIs, because you are building single page apps or whatever, this post will cover the steps you can take to enable Web API on an existing MVC project.

In the NuGet Package Manager Console, make sure the MVC project is selected in the default project dropdown list and run the command below:

Install-Package Microsoft.AspNet.WebApi

Right-click on your “App_Start” folder, add a new class, and name it “WebApiConfig”.

Add the following using statement:

using System.Web.Http;

and add a new method called “Register” that takes a parameter named “config” of type HttpConfiguration, as shown below:

public static void Register(HttpConfiguration config)
{
    // Web API configuration and services

    // Web API routes
    config.MapHttpAttributeRoutes();

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );
}



Next we need to edit the code in “Global.asax”. Add the following using statement:

using System.Web.Http;

Add a call to GlobalConfiguration.Configure passing the Register method that we defined earlier in “WebApiConfig” class as shown below


Add a folder named “api” to your MVC project and under that create another folder called “Controllers”; this is where you can add all your API controllers.

Update 01/30/2015: One thing I forgot to mention is that if you are using Ninject for dependency injection, don’t include the ninject.web.webapi NuGet package; at the time of writing there seems to be an issue with this package. Adding the line System.Web.Http.GlobalConfiguration.Configuration.DependencyResolver = new NinjectDependencyResolver(kernel); after RegisterServices was causing the exception below:

Error activating ModelValidatorProvider using binding from ModelValidatorProvider to NinjectDefaultModelValidatorProvider
A cyclical dependency was detected between the constructors of two services.

Follow this article by Peter and roll your own dependency resolver; this seems to work fine.

Enabling Web API Help Pages

For those who don’t know what it is, Web API Help Pages generate pretty nice documentation for your API. Assuming you enabled Web API on an existing MVC project, the next thing you’ll likely want to do is add the help pages. To enable them you simply need to pull down the “Microsoft.AspNet.WebApi.HelpPage” package from NuGet; in the Package Manager Console run: Install-Package Microsoft.AspNet.WebApi.HelpPage

Hope that helps