Control your Virtual Machine Spend with Azure Automation

In this post I will cover how I'm using Azure Automation to automate starting and stopping virtual machines, to save you from unnecessary costs. Like many, I have some Azure subscriptions that I manage using my Microsoft account and others using my organizational account. The reason I mention this is that it affects the way you authenticate from your Azure Automation runbooks. If you are completely new to Azure Automation, I suggest you head over to this link before you read the rest of the post.

If you are new to Azure Automation you might also want to check this post by Keith Mayer; Keith does a good job walking through the various steps involved. If your Azure subscription is linked to a Microsoft account, you'll need to follow the steps in there to create and upload a management certificate to the Azure portal, as well as the steps to add a credential to Azure Automation. That article, however, doesn't cover the scenario where you don't want to use a management certificate to authenticate with your Azure subscription but instead want to use Azure AD and organizational authentication. To use organizational authentication you have to specify "Windows PowerShell Credentials" for the credential type instead of certificate, and enter your organizational account user name and password. One important point to note here is that you cannot have multi-factor authentication enabled for the user account you will be using, as "Add-AzureAccount" with the -Credential option just doesn't seem to work when you have MFA enabled.

You can also create the credential asset using the PowerShell below:

$user = "{replace with organization account}"
$pw = Read-Host "Enter password" -AsSecureString
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $user, $pw
New-AzureAutomationCredential -AutomationAccountName "{replace}" -Name "{replace}" -Value $cred

At this point you should have set up a new Azure Automation account and added a credential setting to assets. The credential setting can be a certificate or Windows PowerShell credentials, depending on your specific scenario. The next thing we'll need to do is add another setting named "settingsJSON" to assets. To perform these steps, select the automation account in the management portal, click on the "assets" option and then "add setting" as shown in the screen shot below.


Next you will see a dialog where you'll need to specify the type of setting to be added; select "add variable".


In the define variable screen, select "String" as the variable type and enter "settingsJSON" for the variable name.


In the define variable value screen we'll need to paste the JSON shown below.


Here is a sample JSON template to paste as the variable value. Before you use it, replace the subscriptionID, subscriptionName, credentialName, and the list of virtual machines you want to automate starting and stopping.

    {
        "subscriptionID": "{replace}",
        "subscriptionName": "{replace}",
        "credentialName": "{replace with name used when credential was setup}",
        "virtualMachines": [
            { "name": "vm1", "serviceName": "vm1" },
            { "name": "vm2", "serviceName": "vm2" }
        ]
    }
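
If you prefer scripting the setup, newer builds of the Azure PowerShell module also include cmdlets for variable assets; here is a minimal sketch, assuming those cmdlets are available in your module version and that you've stored the JSON above in a $json string variable:

# Create the settingsJSON variable asset from PowerShell (account name is a placeholder)
New-AzureAutomationVariable -AutomationAccountName "{replace}" -Name "settingsJSON" -Value $json -Encrypted $false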

Next we need to create a new Azure Automation runbook. Click New, select Runbook and then Quick Create; enter "startstopvms" for the runbook name, select the automation account you want to use, and click create.


Select the newly created runbook "startstopvms", click on "Author", and paste the script below inside the workflow.

    param (
        [Parameter(Mandatory=$true)]
        [String] $action
    )

    # Load settings from the settingsJSON variable asset
    $json = Get-AutomationVariable -Name "settingsJSON"
    Write-Output "json string : $json"
    $settings = ConvertFrom-Json $json

    Write-Output "Retrieving credential : $($settings.credentialName)"
    $psCred = Get-AutomationPSCredential -Name $settings.credentialName
    if ($psCred -eq $null) {
        Write-Output "No stored credentials, trying to load certificate for authenticating with subscription"
        $certificate = Get-AutomationCertificate -Name $settings.credentialName
        Write-Output "Setting azure subscription : $($settings.subscriptionName)"
        Set-AzureSubscription -SubscriptionName $settings.subscriptionName -SubscriptionId $settings.subscriptionID -Certificate $certificate
    } else {
        Write-Output "Organizational authentication, using stored PowerShell credentials with Add-AzureAccount"
        Add-AzureAccount -Credential $psCred
    }

    Select-AzureSubscription $settings.subscriptionName

    # Start or stop every virtual machine listed in the settings
    foreach ($virtualMachine in $settings.virtualMachines) {
        if ($action.ToUpper() -eq "START") {
            Write-Output "Starting virtual machine : $($"
            Start-AzureVM -Name $ -ServiceName $virtualMachine.serviceName -ErrorAction SilentlyContinue
        } else {
            Write-Output "Stopping virtual machine : $($"
            Stop-AzureVM -Name $ -ServiceName $virtualMachine.serviceName -Force -ErrorAction SilentlyContinue
        }
    }

At this point you should be able to test the runbook by clicking on the test option to ensure things are working correctly. Passing "start" for the action parameter should start all virtual machines specified in the JSON settings template; similarly, to stop the virtual machines, simply specify "stop" for the action parameter value.
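
You can also queue the runbook from PowerShell rather than the portal; here is a sketch, assuming the Azure module's Automation cmdlets are installed and the account name placeholder is substituted:

# Queue the runbook, passing the action parameter as a hashtable
Start-AzureAutomationRunbook -AutomationAccountName "{replace}" -Name "startstopvms" -Parameters @{ action = "start" }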

Next we need to create a schedule to start the virtual machines.

Select the "startstopvms" runbook we created earlier, click on the "schedule" option and then "link to a new schedule".


In the “Add Schedule” screen I’ll name the new schedule “start-virtualmachines” and click next


In the configure schedule screen, since I want this to run daily, I'll select the "daily" option and specify a start time. In my case I want the virtual machines to come up at 9AM every morning.


Next we specify the runbook parameter value, in this case the "action" parameter. Since this is a schedule for starting virtual machines, we'll need to specify "start".


Clicking the "check" button will create the schedule. Similarly, you can create a schedule to stop the virtual machines: repeat the steps we performed earlier and specify "stop" for the action parameter value.
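
If you'd rather script the schedules as well, here is a sketch using the Automation schedule cmdlets, assuming your version of the Azure module includes them (names and times follow the steps above):

# Create a daily schedule that fires at 9AM
New-AzureAutomationSchedule -AutomationAccountName "{replace}" -Name "start-virtualmachines" -StartTime "09:00" -DayInterval 1
# Link the schedule to the runbook, passing "start" for the action parameter
Register-AzureAutomationScheduledRunbook -AutomationAccountName "{replace}" -Name "startstopvms" -ScheduleName "start-virtualmachines" -Parameters @{ action = "start" }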

That's it. You should now be able to control the unnecessary expenses that result from leaving your virtual machines running, using the Azure Automation feature. Obviously Azure Automation has many other scenarios and use cases.

Hope this helps.


How to parameterize tokens in URLs for web service requests in a Visual Studio webtest

A while back I blogged about automating testing of your REST API using Visual Studio webtests, and one of the questions I got was how to parameterize tokens in URLs for REST API endpoints. If you have not seen that post, see this link. Let's take a look at our northwind API as an example.

Say you want to retrieve a single customer entity, and the REST API endpoint looks something like http://{apiHost}/api/customers/{customerID}

By default, if you add a web service request to your webtest you will need to hardcode the "customerID" portion of the above API endpoint. This is not necessarily bad if those IDs never change. But if you have multiple web service requests in your webtest for different scenarios and you hard code the IDs in the URLs, then when the IDs change you will have to go and update the URL of every web service request in your webtest, which can quickly become a nightmare. So let's see how we can solve that problem.

Approach 1:

Add a context parameter to your webtest called "customerIDParam" and set the value to a customer ID that you will use as test data for the webtest. Follow the steps below to add the context parameter.

Assuming the webtest is open in Visual Studio, right click on the context parameters node and select "add context parameter".


Enter the name you want to use for the parameter, and for the value enter the customer ID you will use as test data.

Once the context parameter is set up, you can use it in the URL for the web service request as shown below. In the highlighted request you can see both the API host portion and the customer ID portion using context parameters.
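
In text form, with context parameters named apiHost and customerIDParam (the names here are just for illustration), the bound request URL uses the double curly brace syntax:

http://{{apiHost}}/api/customers/{{customerIDParam}}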


Approach 2:

In this approach we are going to use some custom code to get the parameters for the webtest. The first thing we'll need to do is add a data source to the webtest project; the datasource will contain the test data we want to use, in this case the api host and customer ID values. The next thing we'll need to do is convert the webtest into a coded webtest. One thing to keep in mind here is that the Visual Studio designer doesn't allow binding URLs to a datasource; it would have been nice if we could do that, as it would eliminate having to write custom code.

To convert the webtest into a coded webtest, assuming you've opened the webtest inside Visual Studio, click on the "Generate Code" toolbar option as shown in the screen capture below.


You could also simply add a class and inherit from the "WebTest" class defined in the "Microsoft.VisualStudio.TestTools.WebTesting" namespace. I'm not sure why there is no option to add a coded web test within Visual Studio that would do this automatically.

Assuming you have the coded webtest class added to the project, we can get the apihost and customer id values from the datasource by adding the code below to the coded webtest and using those values when we construct the "WebTestRequest".


Hope that helps


My Initial thoughts on Azure Resource Manager tooling in Visual Studio

I don't know about you, but to me Azure Resource Manager is one of the coolest features; it takes all the complexity of provisioning resources within Azure away from us. If you are new to Azure Resource Manager, I highly suggest checking out the Build and Ignite sessions on this topic. In this post I wanted to cover some initial thoughts on the Azure Resource Manager (ARM) tooling in Visual Studio; I will continue to add to this as I find new stuff.

Things I like

  • Project template to create an Azure Resource Group based on templates
  • Full IntelliSense when editing ARM templates
  • Easy deployment of a resource group template from Visual Studio as well as from PowerShell (see the sketch below)
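
For the PowerShell route, the deployment boils down to something like the sketch below, assuming the 0.9.x-era Azure PowerShell module; the group name and file paths are placeholders:

# Switch the Azure module to the resource manager cmdlets
Switch-AzureMode AzureResourceManager
# Create (or update) the resource group and deploy the template into it
New-AzureResourceGroup -Name "{replace}" -Location "West US" `
    -TemplateFile ".\Templates\azuredeploy.json" `
    -TemplateParameterFile ".\Templates\azuredeploy.parameters.json"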

Things to know

When you first create a Resource Group project, the select azure template dialog makes a bunch of HTTP calls to get template metadata; if you don't have an internet connection, or your connection dropped, you are going to see the error below. It does, however, cache the metadata after it is loaded from the server for the first time, so if your connection drops later, or you are mobile and don't have internet but want to work on the template, you should have no problems.


My Wish List

Today when you create a new resource group project, you start off with a blank template. It would be nice if the tooling provided some sort of hook into the GitHub repository that contains templates created by the community; that way you are not starting off with a blank template. If you are not familiar with the quick start templates, take a look at this link. The templates you find there are indexed from this GitHub repository. You can fork the repository and contribute to the community.

If you started off creating your template inside Visual Studio and now want to publish it to the community templates GitHub repo, you still have to do a lot of additional work, such as renaming the template json file and the parameter json file, as well as creating the metadata file, before you can commit and send a pull request. I would like to see a way to publish the template directly into the community template GitHub repo from Visual Studio; that would be tremendously helpful and would save template authors a lot of the work needed to get a template ready for publishing to the repository.

Biggest Gap I see

Let's say the design of your solution requires you to build a bunch of Logic App connectors (API apps) and a Logic App (business process), and you want to create an ARM template to provision the connectors and the Logic App together. I'm finding this extremely difficult and painful with the current tooling. You have to deploy the connectors individually, then switch to Azure to design the Logic App, and then test it. Once you know the Logic App is working correctly, you have to take the JSON, come back to Visual Studio, and wire it up to the ARM template. It's a really painful process IMO. I'm sure this will get better as Azure App Service gets out of preview.

I'd love to hear what you think of the ARM tooling in Visual Studio. Drop a comment if you have interesting things to share.



Enabling Organizational Authentication on an existing MVC project

If you have been working with MVC applications, you probably already know that when you first create the project you have to make a call on how the application is going to authenticate users. The options are: no authentication; Individual Accounts, which essentially means users are stored in a database and the template auto-generates all the plumbing code for authentication, user management, etc.; and lastly Organizational Accounts, where you basically outsource all authentication to an external authentication system, typically a federated identity management system. But what if you started off with individual accounts and later decide to move to organizational accounts? If you have run into this scenario, you know it is not easy to swap; I wish there were an easier way in Visual Studio. The goal of this post is to document the steps you can take to swap the authentication mechanism to organizational accounts.

Nuget Packages to be Uninstalled

In the package manager console, run the commands in the order listed below to uninstall all the Owin nuget packages that were pulled down by the project template.

Uninstall-Package Microsoft.Owin.Security.Twitter
Uninstall-Package Microsoft.AspNet.Identity.Owin
Uninstall-Package Microsoft.Owin.Security.OAuth
Uninstall-Package Microsoft.Owin.Security.MicrosoftAccount
Uninstall-Package Microsoft.Owin.Security.Google
Uninstall-Package Microsoft.Owin.Security.Facebook
Uninstall-Package Microsoft.Owin.Security.Cookies
Uninstall-Package Microsoft.Owin.Security
Uninstall-Package Microsoft.Owin.Host.SystemWeb
Uninstall-Package Microsoft.Owin
Uninstall-Package Owin

Since Application Insights packages are not pulled down when you select organizational authentication for a new ASP.NET application, we will go ahead and uninstall those packages as well. I did not test whether there are any issues with enabling Application Insights on an ASP.NET web application configured for organizational authentication; maybe that's for another day.

In the package manager console, run the following commands in the order listed below.

Uninstall-Package Microsoft.ApplicationInsights.Web
Uninstall-Package Microsoft.ApplicationInsights.Web.TelemetryChannel
Uninstall-Package Microsoft.ApplicationInsights.PerfCounterCollector
Uninstall-Package Microsoft.ApplicationInsights.JavaScript
Uninstall-Package Microsoft.ApplicationInsights.DependencyCollector
Uninstall-Package Microsoft.ApplicationInsights.Agent.Intercept
Uninstall-Package Microsoft.ApplicationInsights

Nuget Packages to be installed

Next we need to install System.IdentityModel.Tokens.ValidatingIssuerNameRegistry. This package adds token validation capabilities and ensures that the signer and issuer are a valid pair.

In the package manager console, run the command below.

Install-Package System.IdentityModel.Tokens.ValidatingIssuerNameRegistry

Assembly references to be added

Update the project references and add the assembly references listed below.

  • System.IdentityModel
  • System.IdentityModel.Services

EntityFramework wireup to push keys and tenant IDs into database

Next we need to add the Entity Framework classes that enable saving keys and tenant ids into a database once token validation completes during organizational authentication. Follow the steps below.

  • Right click on the "Models" folder, add a new class, and name it "TenantRegistrationModels.cs"
  • Replace the using statements in the TenantRegistrationModels.cs file with the following:
 using System;
 using System.Collections.Generic;
 using System.Linq;
  • Add the following model classes to the "TenantRegistrationModels.cs" file. You can break these into individual files if you prefer.
public class IssuingAuthorityKey
{
    public string Id { get; set; }
}

public class Tenant
{
    public string Id { get; set; }
}
  • Right click on the "Models" folder in your existing MVC project, add a new class, and name it "TenantDbContext.cs"
  • Replace the using statements in the "TenantDbContext.cs" file with the following:
 using System;
 using System.Data.Entity;
  • Replace the contents of the "TenantDbContext" class with the code below
public class TenantDbContext : DbContext
{
    public TenantDbContext()
        : base("DefaultConnection")
    {
    }

    public DbSet<IssuingAuthorityKey> IssuingAuthorityKeys { get; set; }

    public DbSet<Tenant> Tenants { get; set; }
}
  • Delete the files below from the "Models" folder in your existing MVC project; we no longer need them since we are switching from individual user accounts to organizational authentication.
    • AccountViewModels.cs
    • IdentityModels.cs
    • ManageViewModels.cs

Next, add a folder to the existing project and name it "Utils".

Right click on this folder, add a class, and name it "DatabaseIssuerNameRegistry.cs".

Replace the using statements in the "DatabaseIssuerNameRegistry.cs" file with the following.

using System;
using System.Collections.Generic;
using System.IdentityModel.Tokens;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Web;
using System.Web.Hosting;
using System.Xml.Linq;

Additionally, you have to include one more using statement above to bring in the namespace for your "Models".

Replace the "DatabaseIssuerNameRegistry" class with the code below.

public class DatabaseIssuerNameRegistry : ValidatingIssuerNameRegistry
{
    public static bool ContainsTenant(string tenantId)
    {
        using (TenantDbContext context = new TenantDbContext())
        {
            return context.Tenants
                .Where(tenant => tenant.Id == tenantId)
                .Any();
        }
    }

    public static bool ContainsKey(string thumbprint)
    {
        using (TenantDbContext context = new TenantDbContext())
        {
            return context.IssuingAuthorityKeys
                .Where(key => key.Id == thumbprint)
                .Any();
        }
    }

    public static void RefreshKeys(string metadataLocation)
    {
        IssuingAuthority issuingAuthority = ValidatingIssuerNameRegistry.GetIssuingAuthority(metadataLocation);

        // Figure out whether the metadata contains any keys or issuers we have not stored yet
        bool newKeys = false;
        bool refreshTenant = false;
        foreach (string thumbprint in issuingAuthority.Thumbprints)
        {
            if (!ContainsKey(thumbprint))
            {
                newKeys = true;
                refreshTenant = true;
            }
        }

        foreach (string issuer in issuingAuthority.Issuers)
        {
            if (!ContainsTenant(GetIssuerId(issuer)))
            {
                refreshTenant = true;
            }
        }

        if (newKeys || refreshTenant)
        {
            using (TenantDbContext context = new TenantDbContext())
            {
                if (newKeys)
                {
                    foreach (string thumbprint in issuingAuthority.Thumbprints)
                    {
                        if (!ContainsKey(thumbprint))
                        {
                            context.IssuingAuthorityKeys.Add(new IssuingAuthorityKey { Id = thumbprint });
                        }
                    }
                }

                if (refreshTenant)
                {
                    foreach (string issuer in issuingAuthority.Issuers)
                    {
                        string issuerId = GetIssuerId(issuer);
                        if (!ContainsTenant(issuerId))
                        {
                            context.Tenants.Add(new Tenant { Id = issuerId });
                        }
                    }
                }

                // Persist the new keys and tenants so validation succeeds on subsequent requests
                context.SaveChanges();
            }
        }
    }

    private static string GetIssuerId(string issuer)
    {
        return issuer.TrimEnd('/').Split('/').Last();
    }

    protected override bool IsThumbprintValid(string thumbprint, string issuer)
    {
        return ContainsTenant(GetIssuerId(issuer))
            && ContainsKey(thumbprint);
    }
}
Updates to App_Start

Make the following changes to the App_Start folder.

  • Delete the "Startup.Auth.cs" file from the App_Start folder, as we no longer need to wire up authentication to the Owin middleware
  • Edit the "IdentityConfig.cs" file and make the changes below
    • Replace the using statements with the following
using System;
using System.Collections.Generic;
using System.Configuration;
using System.IdentityModel.Claims;
using System.IdentityModel.Services;
using System.Linq;
using System.Web.Helpers;
using WebApplication3.Utils;
    • Replace everything inside the namespace statement with the code below

public static class IdentityConfig
{
    public static string AudienceUri { get; private set; }
    public static string Realm { get; private set; }

    public static void ConfigureIdentity()
    {
        RefreshValidationSettings();

        // Set the realm for the application
        Realm = ConfigurationManager.AppSettings["ida:realm"];

        // Set the audienceUri for the application
        AudienceUri = ConfigurationManager.AppSettings["ida:AudienceUri"];
        if (!String.IsNullOrEmpty(AudienceUri))
        {
            UpdateAudienceUri();
        }

        AntiForgeryConfig.UniqueClaimTypeIdentifier = ClaimTypes.Name;
    }

    public static void RefreshValidationSettings()
    {
        string metadataLocation = ConfigurationManager.AppSettings["ida:FederationMetadataLocation"];
        DatabaseIssuerNameRegistry.RefreshKeys(metadataLocation);
    }

    public static void UpdateAudienceUri()
    {
        // Only add the audience uri if it is not already in the allowed list
        int count = FederatedAuthentication.FederationConfiguration.IdentityConfiguration
            .AudienceRestriction.AllowedAudienceUris.Count(
                uri => String.Equals(uri.OriginalString, AudienceUri, StringComparison.OrdinalIgnoreCase));
        if (count == 0)
        {
            FederatedAuthentication.FederationConfiguration.IdentityConfiguration
                .AudienceRestriction.AllowedAudienceUris.Add(new Uri(IdentityConfig.AudienceUri));
        }
    }
}

Make the following changes to the files in the "Controllers" folder.

  • Delete the "ManageController.cs" file. If you want to provide user management functions, you'll probably want to keep this controller and update the code to perform those operations against Azure AD using the Graph API

Make the following changes to "Global.asax.cs" by right clicking on the "Global.asax" file and selecting the view code option.

  • Add the code below to the using statements

using System.IdentityModel.Services;

  • Update the code in the "Application_Start" event and add a call to the ConfigureIdentity method in the IdentityConfig class, like below:
    • IdentityConfig.ConfigureIdentity();
  • Add a new event handler, shown below, to hook into the RedirectingToIdentityProvider event on the WSFederationAuthenticationModule
private void WSFederationAuthenticationModule_RedirectingToIdentityProvider(object sender, RedirectingToIdentityProviderEventArgs e)
{
    if (!String.IsNullOrEmpty(IdentityConfig.Realm))
    {
        e.SignInRequestMessage.Realm = IdentityConfig.Realm;
    }
}

View Updates

Add a new view under the "Account" folder, name it "SignOutCallback.cshtml", and add the snippet below. You can also add your own custom message to show when a user signs out of the application.

<p class="text-success">You have successfully signed out.</p>

Web.Config Updates

We'll need to apply some updates to the web.config of your existing MVC web site. The changes that need to be made are listed below:

  • Under the "configSections" element, add the snippet below
  • Add the snippet below right under the closing element for system.web

  • Add the system.identityModel section right under the "entityFramework" element as shown below
  • Add the following keys to "appSettings" (the code above reads "ida:realm", "ida:AudienceUri" and "ida:FederationMetadataLocation")


  • Add an authorization element under system.web to deny anonymous users access. See below:
  • Add the following snippet right after the "appSettings" element
  • Replace the contents of the "modules" element under the "system.webServer" element with the snippet below

Additional Clean Up

You may also want to replace the contents of the "_LoginPartial.cshtml" view with the snippet below, especially if you still have the original auto-generated code; otherwise you may still have requests going to "Account" controller methods that no longer exist or are needed.

@if (Request.IsAuthenticated)
{
    <ul class="nav navbar-nav navbar-right">
        <li class="navbar-text">Hello, @User.Identity.Name!</li>
        <li>@Html.ActionLink("Sign out", "SignOut", "Account")</li>
    </ul>
}
else
{
    <ul class="nav navbar-nav navbar-right">
        <li>@Html.ActionLink("Sign in", "Index", "Home", routeValues: null, htmlAttributes: new { id = "loginLink" })</li>
    </ul>
}

If you decided to delete the "ManageController" controller per the earlier step, we no longer need any views related to account management activities, so I recommend deleting them. Additionally, all the auto-generated views in the Account folder can also be removed. You should have just one view left here, "SignOutCallback.cshtml", which displays a message when the user signs out of the application.

Delete the "Startup.cs" file from the existing MVC project.

Add the Application to Azure AD

Lastly, add the application in Azure Active Directory. I won't go into the details, as you might already be familiar with it.

One last thing I want to mention: if you started off with the "No Authentication" option, the steps should be 80-90% the same as above; a couple of additional things you'll need to do are bringing in the EntityFramework, Microsoft.AspNet.Identity.Core and Microsoft.AspNet.Identity.EntityFramework packages, as shown below.
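
In the package manager console, that would look something like this (using the package names listed above):

Install-Package EntityFramework
Install-Package Microsoft.AspNet.Identity.Core
Install-Package Microsoft.AspNet.Identity.EntityFramework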

Hope this helps.



Azure Api App – Continuous deployment to Azure from local git repository

You may or may not have seen this article that talks about continuous deployment to Azure for App Service. The thing I want to point out is that the article covers it within the context of a web app; you can pretty much do the same thing for API apps as well, though if you just look at the properties of the API app it may not be quite clear. The UI is not very intuitive at this point; hopefully by GA all of this will be sorted out. In this post I will cover how to configure continuous deployment for API apps.

Once you develop the API app in Visual Studio, you first need to publish it. If you are new to API apps, have a look at Brady Gaster's article. After you have published the API app to Azure, select the API app in the Azure portal and click on the API app host link (see screen shot below). The API app host is basically an Azure web app, where we will need to follow the same steps described in the continuous deployment article I shared earlier. If you haven't configured continuous deployment for web apps, I suggest you read that article first; there are also plenty of walkthroughs online published by various folks on the Azure team.


For the purposes of this article I've gone ahead and configured continuous deployment for the api host web app of my twitterinsights API app. Currently the API looks like below.


You can also see from the screen shot below that I have connected the api host web app to a local git repository, set deployment credentials, and added the hook for my local git repository (the remote git URL for the api host web app).


Next I will go ahead and update the API and commit to my local git repository, just to illustrate that continuous deployment is working correctly. For demonstration purposes I updated the route as shown below.


Next we need to push the changes to the remote repository by running the command below; again, this is clearly described in the continuous deployment article shared earlier.
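
Assuming the remote for the api host web app was added with the name azure, the push would look like this:

git push azure master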


You can see from the screenshot below that the changes were successfully deployed.


Finally, just to prove there is no smoke and mirrors, you can see in the swagger UI below that my changes are indeed pushed to Azure.


Hope that helps. One final word: you can use any of the supported source control repositories; it doesn't have to be a local git repository. Hope you are just as excited as I am about Azure App Service and the possibilities it brings for developers.



Error connecting to Azure Subscription from VisualStudio

I wanted to make this post with the hope of saving others time. For the last two days I have been troubleshooting an issue connecting to my Azure Dev/Test subscription. I had developed a custom resource group template and was getting ready to test it by doing a sample deployment to my Azure subscription. Unfortunately, I simply could not get past the sign-in in the "deploy to resource group" dialog in Visual Studio 2015. I have multiple subscriptions on my laptop, as most of us do; when I picked the Microsoft account from the drop down (in this case my business email, which is registered as a Microsoft account), I would get the Visual Studio sign-in page, STS would then redirect me to Live, I was able to log in there, and I would come back to the "deploy to resource group" dialog, but all the fields remained read only. I had no idea why. What had actually happened was that my sign-in had failed (not that I used an invalid login or anything); it would have been helpful if Visual Studio had shown some error message, and there was nothing visually in the dialog to indicate a problem with sign in, as you can see from the screen shot below.


So I popped back to Visual Studio 2013, opened the same resource group template project, and tried to deploy. Here is what I found: the deploy to resource group dialog looks like below in VS 2013, and when I tried to sign in to my Azure Dev/Test subscription I got basically the same behavior as in VS 2015, with one difference: I could visually see that I was not signed in, even though I had successfully signed in to Live. If the sign in had been successful, the button text would change to "Sign out" and all the fields would become enabled.


Either way, there should be some sort of error message shown to the user when there is a problem with sign in. At this point I needed to find out what was really happening, so I tried creating a new ASP.NET project in Visual Studio 2015 and selected "Host in the Cloud". The project was successfully created, but no cloud resources were provisioned in my Azure subscription and there were no error messages either; Visual Studio 2015 had decided to fail gracefully. I attempted the same thing in Visual Studio 2013, and right after clicking OK in the project creation dialog I got the dialog shown below.


So I clicked on "Sign In" in the above dialog, signed in using the Microsoft account associated with my Azure Dev/Test subscription, and ended up getting the error below from Visual Studio 2013 after the sign in.


At this point I was still not sure what the heck was going on. I could successfully log in to both Azure portals (old and new) using the same account, but Visual Studio was still crapping all over the place. What is up with the error messages, Visual Studio? Whatever happened to good user experience?

So as a last resort I tried connecting to my Azure subscription from Server Explorer and got the following error.


The error above confused me even more. Another thing I noticed was that in VS 2015, under the account settings page, it was showing two entries for the same email address, both listed as Microsoft accounts. That did not make any sense to me. So I switched to PowerShell and ran Get-AzureAccount, and sure enough there were two accounts; see below.


What was interesting here was that the second account had no subscription associated; it was just associated with a different tenant. Keep in mind this is not even an organizational account; it's just my business email hosted in Google Apps for Business. What I noticed is that Visual Studio was adding this second account whenever I signed in using my Microsoft account. Really bizarre stuff. At this point I knew something was up with my Live account. To validate, I added another Live account to my existing Dev/Test Azure subscription as co-administrator and attempted to sign in using that account instead, and everything worked as expected, which confirmed my assumption.

I then focused my full attention on figuring out what could be wrong with the account. I checked the Visual Studio service settings and the Azure AD side, but couldn't find the root cause. As a last resort I started going back weeks in my memory, trying to recall any change I might have made that could have caused this. Suddenly it dawned on me: my client had granted my business email access to their Office 365 SharePoint site, and once I received the invitation email I clicked on the link and logged into the site using my Microsoft account. As a result, my Microsoft account became linked to my client's Azure AD tenant for Office 365 under that email. I'm not 100% clear on what happens under the hood after you sign in to an Azure subscription from Visual Studio, but my guess is there are multiple bugs in that code, based on the behavior I'm seeing.

Basically, after that point the connect-to-Azure functionality in Visual Studio was completely broken for this account. I was able to successfully repro this using another Live account that had been working before. The steps to reproduce the issue are quite straightforward; see below.

  1. Log in to an Office 365 SharePoint site using an administrator account and share the site with an external email address; it can be anything, as long as you can check the email.
  2. Once you receive the invite email, click on the link in the email to access the SharePoint site (make sure you clear the cache and that there are no cookies left from previous logins). This will bring up a realm selection page where you will see two options: Microsoft account and organizational account. Depending on how you log in to your Azure subscription, choose Microsoft account or organizational account, and log in.
  3. After you are successfully logged in, open Visual Studio (2013 or 2015) and try to connect to an Azure subscription using the same account you used to log in to the Office 365 SharePoint site. You can try connecting via Server Explorer, creating a project with the host in cloud option, deploying a resource group, etc.; nothing will work.

Once I removed the account from the Azure AD tenant for the Office 365 SharePoint site, everything started working again. Hopefully this saves some time for others; I've been pulling my hair out over this for the last two days.



Setting up docker on Azure and deploying sample ASPNET vNext app

This post is just to document my experience running docker on Azure and deploying the ASPNET vNext sample HelloWeb app. I'm super excited about the docker support in Azure and ASPNET vNext and couldn't wait any longer to try it out. At a high level, these are the steps I took:

  • Set up a Docker client on a virtual machine running the Ubuntu desktop version
  • Created the Docker host VM on Azure
  • Pulled down the ASPNET vNext HelloWeb sample app
  • Created the Dockerfile
  • Built the container
  • Ran the container
  • Created an endpoint port mapping for TCP port 80

Setting up Docker Client

While the docker client can be set up on Windows and Mac OSX machines, I decided to set it up as a virtual machine on my Mac with Ubuntu desktop on it.

Things we need to set up on the docker client

  • Install node
  • Install Javascript package manager (npm)
  • Install Azure cross platform CLI tools
  • Connect to Azure subscription
  • Install docker client

Since the Azure cross platform CLI tools are written in Node.js, the first thing we'll need to do is log in to the Ubuntu desktop virtual machine and install Node.js by running the command below.

sudo apt-get install nodejs-legacy

Next, install the JavaScript package manager (npm) by running the command below.

sudo apt-get install npm

Install the Azure cross platform CLI tools by running the command below.

sudo npm install azure-cli --global

Connect to Azure Subscription

This process is very similar to the Azure PowerShell setup on Windows machines. Download the publish settings file by running the command below; it will launch a browser session where you will need to log in with your Windows Live account, after which the publish settings file will be downloaded to your local machine.

azure account download

Import the publish settings file

azure account import <publish settings file>

Finally, install the docker client.

sudo apt-get install

Create the docker host virtual machine on Azure

We are going to use the azure vm docker create command to create the docker host virtual machine on Azure. It uses the virtual machine extensions feature in Azure to install docker once the virtual machine is provisioned. If you are not familiar with virtual machine extensions, I highly recommend reading this article; for a list of virtual machine extensions, see this page. One important thing to remember is that you should run this command from your docker client machine; the walkthroughs online did not explicitly mention this. The reason is that, in addition to creating the virtual machine and installing docker on it, the command creates the certificates the docker client needs to talk to the docker host. So if you don't run this command from your docker client machine, you may need to perform additional steps so the docker client can properly authenticate with the docker host.

If you want to see the azure vm docker create command usage, simply add the --help or -h switch right after azure vm docker create. Before we can create the host vm for docker, we need to identify the Linux image to use; run the command below.

azure vm image list | grep "20150123"

The output from the command above should look something like the screenshot shown below; you can see the image circled in red.


Run the command below to create docker host VM on Azure.

azure vm docker create ram-docker-host "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB" rprakashg P@ssw0rd123 -e 22 -l "West US" -u -z "Large" -s "a8349281-7715-45c8-ac55-78787752ea7e"

Once the virtual machine is fully provisioned and running, you can verify that your docker client can talk to the host by running the following command.

docker --tls -H tcp:// info

Now that the docker client and host are set up, let's look at how we can run the sample HelloWeb app published by the aspnet team in docker.

Running ASPNET vNext sample app in docker

The ASPNET team has published an excellent walkthrough here. There are a few things I had to do differently, which I will cover below.

I had some issues cloning the aspnet/Home repository per the instructions in the article. I was getting a permission denied error, which had to do with how the client machine authenticates with GitHub, so I went down the path of setting up my docker client to talk to GitHub. This really messed things up, as the docker client could not talk to the docker host after I created a new ssh key for GitHub; it could be an issue with how I set things up, but I ended up wasting hours :(. Finally I resorted to HTTPS cloning as shown below. You can find the HTTPS clone URL right on the repository page on GitHub.

git clone
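
For reference, the Dockerfile follows the sample from the walkthrough; from memory it looked roughly like the sketch below (kpm restore and the k kestrel entry point were the vNext-era commands, and port 5004 matches the run command later, but treat this as an approximation rather than the exact file):

FROM microsoft/aspnet
COPY . /app
WORKDIR /app
RUN ["kpm", "restore"]
EXPOSE 5004
ENTRYPOINT ["k", "kestrel"]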

Building the container image

On the docker client machine running Ubuntu desktop, I had to run the build command with elevated privileges using sudo. Additionally, I had to make sure the docker client was talking to the host based on a public/default CA pool by adding --tls -H tcp:// (which is my docker host) to pretty much every docker command I issued. I'm not sure why the default settings did not work; definitely something to investigate further, and if you have seen the same issue and have an explanation, let me know. Building the container image using the command below took a long time over HTTPS versus the default setting, which I think is a non-networked unix socket.

sudo docker --tls -H tcp://  build -t hellowebapp .

If you want to check the status of the above command, simply copy the id string returned by the previous command and run the command below

sudo docker --tls -H tcp:// logs -t <replace with id string>

Running the container

I ran into similar issues here as with build; additionally, the container just wouldn't start. I encountered a "System.FormatException: Value for switch '/bin/bash' is missing" error when I tried to run the container using the command described in the article. It worked fine once I started running all docker commands with the --tls -H options.

sudo docker --tls -H tcp:// run -d -t -p 80:5004 hellowebapp

Verify running containers on host

sudo docker --tls -H tcp:// ps -a

Create an endpoint port mapping for TCP port 80

The last step, as discussed in the article, is to create an endpoint port mapping for TCP port 80 on the docker host virtual machine. For the name, select HTTP from the dropdown; the protocol should be TCP; and enter 80 as the value for both the public port and the private port. See the screen shot below.


Finally, I have the sample HelloWeb app successfully running inside docker. Check it out here.

Next, I'm hoping to build a much more real-world app in ASPNET vNext and test it out. I've already got my Mac configured for ASPNET vNext development with Sublime Text.

Huge shout out to the Linux team at Microsoft and everyone in the open source community who has contributed to the various tools. You can see the Microsoft stack is truly becoming more and more open.

Useful links

Docker On Azure

Docker support in ASPNET vNext




Setting up Azure CLI on a Mac machine and creating Linux virtual machines from terminal

This post is more for me, as a reference to the steps I took to configure a Mac machine to connect to Azure and create Linux virtual machines. If it helps others, then great.

The Azure command line tools for Mac and Linux allow us to create and manage virtual machines, web sites, and Azure Mobile Services from Mac and Linux desktops. Download and install the Azure SDK for Mac here.

Connecting to azure subscription

Before you can run operations on your Azure subscription you need to import your subscription; the steps to do this are pretty similar to how you set up Azure PowerShell on Windows machines.

Fire up a new instance of terminal and run the following command.

azure account download

The above command will launch a browser session and take you to the publish settings download page.

Save your azure publish settings file locally.

Next, execute the command below to import your publish settings file.

azure account import <file>

If everything went OK, you should be good to run operations on your Azure subscription. Next we will create the Linux virtual machines, but before we can do that we need to create an ssh certificate.

Creating “ssh” certificate

A compatible ssh key must be specified for authentication when creating the Linux virtual machine in Azure. The supported formats for ssh certificates are .cer and .pem; virtual machine creation will fail if you use any other format. Run the command below in a terminal window to generate a compatible ssh key for authentication.

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout rprakashg.key -out rprakashg.pem

Creating Linux Virtual Machines

We are going to use the azure vm create command from a terminal window to create virtual machines. If you want help on command usage, add -h or --help at the end.

The parameters we need to create a Linux virtual machine are listed below.

-n <virtual machine name>

-u <blob uri>

-z <virtual machine size>

-t <cert> Specify ssh certificate here for authentication.

-l <location> specifies location

-s <subscription id>

-p <password>

-u <username>

Identifying the Ubuntu image to use

azure vm image list | grep "20150123"

20150123 is the date the image was created. See the output from the above command below; you can see the full image name for the Ubuntu server to use when creating the virtual machine.


Here is a sample command to create a Linux virtual machine using the ssh certificate we created above.

azure vm create "ubuntu-server1" "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB" -l "West US" -u "" -z "Medium" -s "a8349281-7715-45c8-ac55-78787752ea7e" -t rprakashg.pem -e 22 -p <password> -u "rprakashg"
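
Once the virtual machine is provisioned, you should be able to verify access by opening an SSH session with the private key generated earlier; the DNS name below is a placeholder:

ssh -i rprakashg.key -p 22 rprakashg@<dns-name>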

Summarizing options for running Microsoft workloads on Google Cloud Platform

When it comes to Infrastructure as a Service (IaaS), Amazon and Azure IMO dominate this space. Google also has an IaaS offering on their cloud platform, called Compute Engine. I have been using Compute Engine with Windows images to run workloads such as SharePoint, strictly for testing and learning purposes, so I can make better recommendations to my customers who are looking to move their Microsoft workloads to the cloud.

Currently, only Windows Server 2008 R2 Datacenter Edition is available as an operating system image for Compute Engine instances. If you have the Google Cloud Platform SDK installed, you can run the command below to see the list of Windows images.

gcloud compute images list --project windows-cloud --no-standard-images

Just like an Azure VM, when you create a Compute Engine instance using a Windows image you have to pass a username and password. You can store passwords in a password file and specify it at the time of creating the virtual machine.

Once the Compute Engine instance is up and running, you can remote into it using an RDP client; from a Mac I use the Remote Desktop app, which is available free from the App Store. You do need to enable RDP before you can remote into the Compute Engine instance; if you haven't seen my post on this, see enabling RDP.

If you have done a lot of work on Azure IaaS, you will find that there is simply no PowerShell support available natively within GCE. I have been running operations on Compute Engine from PowerShell using the function below; within my PowerShell scripts I build the arguments for the gcloud command line utility and call this function.

Function Run-CloudCommand() {
    param (
        [string] $Arguments
    )

    # Launch gcloud.cmd with the supplied arguments and capture both output streams
    $pinfo = New-Object System.Diagnostics.ProcessStartInfo
    $pinfo.FileName = "gcloud.cmd"
    $pinfo.Arguments = $Arguments
    $pinfo.RedirectStandardError = $true
    $pinfo.RedirectStandardOutput = $true
    $pinfo.UseShellExecute = $false
    $pinfo.WorkingDirectory = "c:\program files\google\cloud sdk\google-cloud-sdk\bin"

    $p = New-Object System.Diagnostics.Process
    $p.StartInfo = $pinfo
    $p.Start() | Out-Null

    # Read both streams, then wait for the process to exit so ExitCode is valid
    $stdout = $p.StandardOutput.ReadToEnd()
    $stderr = $p.StandardError.ReadToEnd()
    $p.WaitForExit()

    if ($p.ExitCode -ne 0) {
        Write-Error $stderr
    }

    return $stdout
}
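
With the function defined, calling gcloud from a script looks like the sketch below; the project id is a placeholder, and --format json makes the output easy to parse:

# List Compute Engine instances and parse the JSON output into objects
$output = Run-CloudCommand -Arguments "compute instances list --project <project-id> --format json"
$instances = $output | ConvertFrom-Json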



I would hope Google is working on a PowerShell SDK; this would be a nice open source initiative to work on if Google has no plans. It certainly wouldn't be that hard to write a set of custom PowerShell cmdlets that interact with the REST APIs. It would be nice to know if folks are interested.

Remote PowerShell on Windows based Compute Engine instances

No native remote PowerShell support is available like in Azure; however, you can get remote PowerShell enabled on a Compute Engine instance. If you haven't seen my post on this, check it out here.

What can you run on Google Cloud Platform

You can run server products listed below on compute engine.

  • MS Exchange Server
  • SharePoint Server
  • SQL Server Standard Edition
  • SQL Server Enterprise Edition
  • Lync Server
  • System Center Server
  • Dynamics CRM Server
  • Dynamics AX Server
  • MS Project Server
  • Visual Studio Deployment
  • Visual Studio Team Foundation Server
  • BizTalk Server
  • Forefront Identity Manager
  • Forefront Unified Access Gateway
  • Remote Desktop Services

For SQL Server, the number of licenses required by a Compute Engine instance is tied to the number of virtual cores; for example, if you used machine type n1-standard-2, which has two virtual cores, you would need two SQL Server Standard or Enterprise licenses.

The source for the above info is this article, which also contains a lot of information about running Microsoft software on Compute Engine and additional details on the process to go through.

I've seen folks in forums and online say that Windows based Compute Engine instances start up slowly, but I personally have not experienced that; in fact, I found Windows instances start up and shut down faster than Azure VMs.

Machine types

The machine type determines the spec of your virtual machine instance, such as the amount of RAM, number of virtual cores, persistent disk limits, etc. Compute Engine has four classes of machine types.

  • Standard machine types
  • High CPU machine types
  • High memory machine types
  • Shared-core machine types

You can have up to 16 persistent disks, with a total size of up to 10 TB, attached to all machine types except shared-core machine types.

For more info on machine types, see this article.


Pricing

Compute Engine is priced cheaper compared to Azure VMs, but there are far fewer machine type options available. A couple of interesting things to know: you are charged a minimum of 10 minutes, which means if you start an instance and use it for 5 minutes, you still pay for 10 minutes. Apart from being priced competitively, Google also offers sustained use discounts. For more info on pricing, check out this article.

If you are interested in trying out Google Cloud Platform, head over to this link and sign up for a trial. You get a $300 credit to use over 6 months.

If you run into any issues and need support, ask a question on Stack Overflow and tag it with google-compute-engine.