Azure Api App – Continuous deployment to Azure from local git repository

You may or may not have seen this article about continuous deployment to Azure App Service. The thing I want to point out is that the article covers the topic in the context of a web app; you can do pretty much the same thing for API apps as well. If you just look at the properties of an API app, this may not be obvious; the UI is not very intuitive at this point, though hopefully all of this will be sorted out by GA. In this post I will cover how to configure continuous deployment for API apps.

Once you have developed the API app in Visual Studio, you first need to publish it. If you are new to API apps, have a look at Brady Gaster's article on azure.com. After you have published the API app to Azure, select the API app in the Azure portal and click on the API app host link (see screenshot below). The API app host is basically an Azure web app, so we follow the same steps described in the continuous deployment article I shared earlier. If you haven't configured continuous deployment for web apps before, I suggest you read that article first; there are also plenty of walkthroughs online published by various folks on the Azure team.

image

For the purposes of this article I've gone ahead and configured continuous deployment for the API host web app of my twitterinsights API app. Currently the API looks like this:

image

You can also see from the screenshot below that I have connected the API host web app to a local git repository, set deployment credentials, and added the remote git URL for the API host web app as the hook for my local git repository.

image

Next I will update the API and commit to my local git repository, just to illustrate that continuous deployment is working correctly. For demonstration purposes I updated the route as shown below.

image

Next we need to push the changes to the remote repository by running the command below; again, this is clearly described in the continuous deployment article shared earlier.

image
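The push itself is plain git; here is a minimal sketch. The remote URL below is a placeholder (copy the real one from your web app's deployment settings in the portal), and the push line is shown commented since it needs real deployment credentials.

```shell
# Throwaway demo repo; in practice run these from your API app project's repo
cd "$(mktemp -d)"
git init -q .
git config user.email "me@example.com"
git config user.name "demo"

# Add the API app host web app's git URL as a remote named "azure"
# (placeholder URL; copy the real one from the portal's deployment settings)
git remote add azure "https://user@myapiapphost.scm.azurewebsites.net/site.git"

# Commit the change; pushing triggers a Kudu build and deployment
echo "route change" > change.txt
git add change.txt
git commit -q -m "Updated route"

# With a real URL this prompts for your deployment credentials and deploys:
#   git push azure master
git remote -v
```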

You can see from the screenshot below that changes were successfully deployed

image

Finally, just to prove there is no smoke and mirrors, you can see in the Swagger UI below that my changes are indeed pushed to Azure.

image

Hope that helps. One final note: you can use any of the supported source control repositories; it doesn't have to be a local git repository. I hope you are as excited as I am about Azure App Service and the possibilities it brings for developers.

Cheers,

Ram

Error connecting to Azure Subscription from VisualStudio

I wanted to write this post in the hope of saving others some time. For the last two days I have been troubleshooting an issue connecting to my Azure Dev/Test subscription. I had developed a custom resource group template and was getting ready to test it with a sample deployment to my Azure subscription. Unfortunately, I simply could not get past the sign-in in the "deploy to resource group" dialog in Visual Studio 2015. Like most of us, I have multiple subscriptions on my laptop. When I picked my Microsoft account, in this case "rprakashg@hotmail.com", from the dropdown, I would get the Visual Studio sign-in page, enter "rprakashg@hotmail.com", get redirected by the STS to Live, log in successfully there, and come back to the "deploy to resource group" dialog, yet all the fields remained read-only. I had no idea why. What had actually happened was that my sign-in had failed (not because of an invalid login or anything). It would have been helpful if Visual Studio had shown me an error message; there was also nothing visually in the dialog to indicate a problem with the sign-in, as you can see from the screenshot below.

image

So I popped back to Visual Studio 2013, opened the same resource group template project, and tried to deploy. The "deploy to resource group" dialog in VS 2013 looks like the screenshot below. When I tried to sign in to my Azure Dev/Test subscription I got basically the same behavior as in VS 2015, but with one difference: I could visually see that I was not signed in, even though I had successfully signed in to Live. If the sign-in succeeds, the button text changes to "Sign out" and all the fields become enabled.

image

Either way, there should be some sort of error message shown to the user when sign-in fails. At this point I needed to find out what was really happening, so I tried creating a new ASP.NET project in Visual Studio 2015 and selected "Host in the Cloud". The project was created successfully, but no cloud resources were provisioned in my Azure subscription, and there were no error messages either; Visual Studio 2015 had decided to fail gracefully. I attempted the same thing in Visual Studio 2013, and right after clicking OK in the project creation dialog, I got the dialog shown below.

image 

So I clicked "Sign In" in the above dialog, signed in using rprakashg@hotmail.com (which is associated with my Azure Dev/Test subscription), and got the error below from Visual Studio 2013 after the sign-in.

image

At this point I'm still not sure exactly what the heck is going on. I can successfully log in to both Azure portals (old and new) using rprakashg@hotmail.com, but Visual Studio was still failing all over the place. What is up with the error messages, Visual Studio? Whatever happened to good user experience? :)

So as a last resort I tried connecting to my Azure subscription from Server Explorer and got the following error.

image

The error above confused me even more. Another thing I noticed was that in VS 2015, the account settings page showed one account for rprakashg@hotmail.com and another for ram.gopinathan@marviewsolutions.com, both as Microsoft accounts. That did not make any sense to me, so I switched to PowerShell and ran Get-AzureAccount, and sure enough there were two accounts; see below.

image

What was interesting here was that the second user had no subscription associated with it; it was just associated with a different tenant. Keep in mind that ram.gopinathan@marviewsolutions.com is not even an organizational account; it's just my business email, hosted in Google Apps for Business. What I noticed is that Visual Studio was adding this account when I signed in using rprakashg@hotmail.com. Really bizarre stuff. At this point I knew something was up with my rprakashg@hotmail.com Live account. To validate this, I added another Live account, rprakashg@outlook.com, to my existing Dev/Test Azure subscription as a co-administrator and attempted to sign in using it instead of rprakashg@hotmail.com. Everything worked as expected, which confirmed my assumption.
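If you run into a similar phantom entry, the Azure PowerShell module lets you clear a cached account from the local machine with Remove-AzureAccount; a quick sketch (the address is the one from my case). Note this only removes the locally cached credential, it does not touch anything in Azure AD, and in my case the real fix turned out to be elsewhere:

```powershell
# Show cached accounts, then remove the stale one (address from my case)
Get-AzureAccount
Remove-AzureAccount -Name "ram.gopinathan@marviewsolutions.com"
```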

I focused my full attention on figuring out what could be wrong with the rprakashg@hotmail.com account. I checked the Visual Studio service settings and the Azure AD side, but couldn't find the root cause. As a last resort I started going back weeks in my memory, trying to recall any change I had made that could have caused this. Suddenly it dawned on me: my client had granted access to their Office 365 SharePoint site to my business email, ram.gopinathan@marviewsolutions.com, and when I received the invitation email I clicked the link and logged into the site using rprakashg@hotmail.com. As a result, my Microsoft account rprakashg@hotmail.com became linked to my client's Azure AD tenant for Office 365 under the ram.gopinathan@marviewsolutions.com email. I'm not 100% clear on what happens under the hood when you sign in to an Azure subscription from Visual Studio, but judging by the behavior I'm seeing, my guess is there are multiple bugs in that code.

Basically, after that point the connect-to-Azure functionality in Visual Studio was completely broken. I was able to successfully repro this using another Live account that had been working before. The steps to reproduce the issue are quite straightforward; see below.

  1. Log in to an Office 365 SharePoint site using an administrator account and share the site with an external email address; it can be anything, as long as you can check the email.
  2. Once you receive the invite email, click the link in the email to access the SharePoint site (make sure you clear the cache and that there are no cookies left from previous logins). This brings up a realm selection page with two options: Microsoft account and Organizational account. If you log in to your Azure subscription with a Microsoft account, choose Microsoft account; otherwise select Organizational account, and log in.
  3. After you are successfully logged in, open Visual Studio (2013 or 2015) and try to connect to an Azure subscription using the same account you used to log in to the Office 365 SharePoint site. You can try connecting via Server Explorer, creating an ASP.NET project with the host-in-the-cloud option, deploying a resource group, etc.; nothing will work.

Once I removed the account from the Azure AD tenant for the Office 365 SharePoint site, everything started working again. Hopefully this saves others some time; I've been pulling my hair out over this for the last two days.

Cheers,

</Ram>

Setting up docker on Azure and deploying sample ASPNET vNext app

This post documents my experience running docker on Azure and deploying the ASPNET vNext HelloWeb sample app. I'm super excited about the docker support in Azure and ASPNET vNext, and I couldn't wait any longer to try it out. At a high level, these are the steps I took:

  • Set up a docker client on a virtual machine running Ubuntu desktop
  • Create a docker host VM on Azure
  • Pull down the ASPNET vNext HelloWeb sample app
  • Create the Dockerfile
  • Build the container image
  • Run the container
  • Create an endpoint port mapping for TCP port 80

Setting up Docker Client

While the docker client can be set up on Windows and Mac OS X machines, I decided to set it up in a virtual machine on my Mac running Ubuntu desktop.

Things we need to set up on the docker client:

  • Install Node.js
  • Install the JavaScript package manager (npm)
  • Install the Azure cross-platform CLI tools
  • Connect to the Azure subscription
  • Install the docker client

Since the Azure cross-platform CLI tools are written in Node.js, the first thing we need to do is log in to the Ubuntu desktop virtual machine and install Node.js by running the command below.

sudo apt-get install nodejs-legacy

Next, install the JavaScript package manager (npm) by running the command below.

sudo apt-get install npm

Install the Azure cross-platform CLI tools by running the command below.

sudo npm install azure-cli --global

Connect to Azure Subscription

This process is very similar to the Azure PowerShell setup on Windows machines. Download the publish settings file by running the command below; it will launch a browser session where you will need to log in with your Windows Live account, after which the publish settings file is downloaded to your local machine.

azure account download

Import the publish settings file

azure account import <publish settings file>

Finally, install the docker client.

sudo apt-get install docker.io

Create docker host virtual machine on azure

We are going to use the azure vm docker create command to create the docker host virtual machine on Azure. It uses the virtual machine extensions feature in Azure to install docker once the virtual machine is provisioned; if you are not familiar with virtual machine extensions, I highly recommend reading this article. One important thing to remember is that you should run this command from your docker client machine; the walkthroughs online did not explicitly mention this. The reason is that, in addition to creating the virtual machine and installing docker on it, the command creates the certificates the docker client needs to talk to the docker host. So if you don't run this command from your docker client machine, you may need additional steps before the client can properly authenticate with the host.

If you want to see the usage for azure vm docker create, simply add the --help or -h switch right after azure vm docker create. Before we can create the host VM for docker, we need to identify the Linux image to use; run the command below.

azure vm image list | grep "20150123"

The output from the command above should look something like the screenshot below; you can see the image circled in red.

linuximg

Run the command below to create docker host VM on Azure.

azure vm docker create ram-docker-host "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB" rprakashg P@ssw0rd123 -e 22 -l "West US" -u https://rprakashg.blob.core.windows.net -z "Large" -s "a8349281-7715-45c8-ac55-78787752ea7e"

Once the virtual machine is fully provisioned and running, you can verify that your docker client can talk to the host by running the following command.

docker --tls -H tcp://ram-docker-host.cloudapp.net:4243 info

Now that the docker client and host are set up, let's look at how to run the sample HelloWeb app, published by the ASPNET team, in docker.

Running ASPNET vNext sample app in docker

The ASPNET team has published an excellent walkthrough here. There are a few things I had to do differently, which I will cover below.

I had some issues cloning the aspnet/Home repository per the instructions in the article. I was getting a permission denied error, which had to do with how the client machine authenticates with GitHub. I went down the path of setting up my docker client to talk to GitHub, and that really messed things up; the docker client could not talk to the docker host after I created a new ssh key for GitHub. This could be an issue with my setup; either way I ended up wasting hours :(. Finally I did an HTTPS clone as shown below; you can find the HTTPS clone URL right on the repository page on GitHub.

git clone https://github.com/aspnet/Home.git

Building the container image

On the docker client machine running Ubuntu desktop, I had to run the build command with elevated privileges using sudo. Additionally, I had to make sure the docker client talked to the host over TLS by adding --tls -H tcp://ram-docker-host.cloudapp.net:4243 (my docker host) to pretty much every docker command I issued. I'm not sure why the default settings did not work; I definitely need to investigate this further, and if you have seen the same issue and have an explanation, let me know. Building the container image using the command below took a long time over TLS versus the default setting, which I think is a non-networked Unix socket.

sudo docker --tls -H tcp://ram-docker-host.cloudapp.net:4243  build -t hellowebapp .

If you want to check the status of the above command, simply copy the ID string it returned and run the command below.

sudo docker --tls -H tcp://ram-docker-host.cloudapp.net:4243 logs -t <replace with id string>

Running the container

I ran into similar issues here as with the build. Additionally, the container just wouldn't start; I encountered a "System.FormatException: Value for switch '/bin/bash' is missing" error when I tried to run the container using the command described in the article. It worked fine once I started running all docker commands with the --tls -H options.

sudo docker --tls -H tcp://ram-docker-host.cloudapp.net:4243 run -d -t -p 80:5004 hellowebapp

Verify running containers on host

sudo docker --tls -H tcp://ram-docker-host.cloudapp.net:4243 ps -a

Create an endpoint port mapping for TCP port 80

The last step, as discussed in the article, is to create an endpoint port mapping for TCP port 80 on the docker host virtual machine. For the name, select HTTP from the dropdown; the protocol should be TCP; enter 80 as the value for both the public and private port. See the screenshot below.

image
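If you prefer the command line over the portal, the same mapping can be created with the cross-platform azure CLI; a hedged sketch using my host VM name (run azure vm endpoint create --help to confirm the exact syntax in your CLI version):

```shell
# map public TCP port 80 on the cloud service to port 80 on the docker host VM
azure vm endpoint create ram-docker-host 80 80
```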

Finally, the sample HelloWeb app is successfully running inside docker. Check it out here: http://ram-docker-host.cloudapp.net/

As a next step I'm hoping to build a more real-world app in ASPNET vNext and test it out. I've already got my Mac configured for ASPNET vNext development with Sublime Text.

Huge shout out to the Linux team at Microsoft and everyone in the open source community who has contributed to the various tools. You can see the Microsoft stack is truly becoming more and more open.

Useful links

Docker On Azure

Docker support in ASPNET vNext

Docker

Cheers,

</Ram>

Setting up Azure CLI on a Mac machine and creating Linux virtual machines from terminal

This post is mostly a reference for the steps I took to configure a Mac machine to connect to Azure and create Linux virtual machines. If it helps others, then great.

The Azure command line tools for Mac and Linux allow us to create and manage virtual machines, web sites, and Azure Mobile Services from Mac and Linux desktops. Download and install the Azure SDK for Mac here.

Connecting to azure subscription

Before you can run operations on your Azure subscription you need to import it; the steps are pretty similar to how you set up Azure PowerShell on Windows machines.

Fire up a new instance of terminal and run the following command.

azure account download

The above command will launch a browser session and take you to https://windows.azure.com/download/publishprofile.aspx

Save your azure publish settings file locally.

Next execute command below to import your publish settings file

azure account import <file>

If everything went OK, you should be good to run operations on your Azure subscription. Next we will create the Linux virtual machine, but before we can do that we need to create an ssh certificate.

Creating “ssh” certificate

A compatible ssh key must be specified for authentication at the time of creating a Linux virtual machine in Azure. The supported formats for ssh certificates are .cer and .pem; virtual machine creation will fail if you use any other format. Run the command below in a terminal window to generate a compatible ssh key for authentication.

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout rprakashg.key -out rprakashg.pem

Creating Linux Virtual Machines

We are going to use the azure vm create command from a terminal window to create virtual machines. If you want help on command usage, add -h or --help at the end.

The parameters we are going to need to create the Linux virtual machine are listed below:

  • -n <virtual machine name>
  • -u <blob uri>
  • -z <virtual machine size>
  • -t <cert> specifies the ssh certificate to use for authentication
  • -l <location> specifies the location
  • -s <subscription id>
  • -p <password>
  • -u <username>

Identifying the Ubuntu image to use

azure vm image list | grep "20150123"

20150123 is the date the image was created. In the output from the command above you can see the full image name for the Ubuntu server to use when creating the virtual machine.

image

Here is a sample command to create a Linux virtual machine using the ssh certificate we created above:

azure vm create "ubuntu-server1" "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB" -l "West US" -u "https://rprakashg.blob.core.windows.net" -z "Medium" -s "a8349281-7715-45c8-ac55-78787752ea7e" -t rprakashg.pem -e 22 -p <password> -u "rprakashg"

Summarizing options for running Microsoft workloads on Google Cloud Platform

When it comes to Infrastructure as a Service (IaaS), Amazon and Azure IMO dominate the space. Google also has an IaaS offering on their cloud platform, called Compute Engine. I have been using Compute Engine with Windows images to run workloads such as SharePoint, strictly for testing and learning purposes, so I can make better recommendations to my customers who are looking to move their Microsoft workloads to the cloud.

Currently, Windows Server 2008 R2 Datacenter Edition is the only operating system image available for Compute Engine instances. If you have the Google Cloud Platform SDK installed, you can run the command below to see the list of Windows images.

gcloud compute images list --project windows-cloud --no-standard-images

Just like an Azure VM, when you create a Compute Engine instance using a Windows image you have to pass a username and password. You can store passwords in a password file and specify it at the time of creating the virtual machine.

Once the Compute Engine instance is up and running, you can remote into it using an RDP client; from a Mac I use the Remote Desktop app, which is available free from the App Store. You do need to enable RDP before you can remote into the Compute Engine instance; if you haven't seen my post on this, see enabling RDP.

If you have done a lot of work on Azure IaaS, you will find that there is simply no native PowerShell support within GCE. I have been running operations on Compute Engine from PowerShell using the function below; within my PowerShell scripts I build the arguments for the gcloud command line utility and call this function.

Function Run-CloudCommand(){
    param(
        [Parameter(Mandatory=$True)]
        [string] $Arguments
    )

    $pinfo = New-Object System.Diagnostics.ProcessStartInfo
    $pinfo.FileName = "gcloud.cmd"
    $pinfo.Arguments = $Arguments
    $pinfo.RedirectStandardError = $true
    $pinfo.RedirectStandardOutput = $true
    $pinfo.UseShellExecute = $false
    $pinfo.WorkingDirectory = "c:\program files\google\cloud sdk\google-cloud-sdk\bin"

    $p = New-Object System.Diagnostics.Process
    $p.StartInfo = $pinfo
    $p.Start() | Out-Null

    # Read both streams before waiting; calling WaitForExit first can
    # deadlock when gcloud writes more output than the pipe buffer holds
    $stdout = $p.StandardOutput.ReadToEnd()
    $stderr = $p.StandardError.ReadToEnd()
    $p.WaitForExit()

    if($p.ExitCode -ne 0)
    {
        Write-Error $stderr
    }
    else
    {
        return $stdout
    }
}
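Calling the function is then just a matter of building the gcloud argument string; a minimal usage sketch (the project name and command here are illustrative, not from my original scripts):

```powershell
# List compute engine instances as JSON and work with them as objects
# (project name is illustrative)
$json = Run-CloudCommand -Arguments "compute instances list --project my-project --format json"
$instances = $json | ConvertFrom-Json
$instances | Select-Object name, status
```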

I would hope Google is working on a PowerShell SDK; this would be a nice open source initiative to work on if Google has no plans. It certainly wouldn't be that hard to write a set of custom PowerShell cmdlets that interact with the REST APIs. It would certainly be nice to know if folks are interested.

Remote powershell on windows based compute engine instances

There is no native remote PowerShell support like Azure has; however, you can get remote PowerShell enabled on a Compute Engine instance. If you haven't seen my post on this, check it out here.

What can you run on Google cloud platform

You can run server products listed below on compute engine.

  • MS Exchange Server
  • SharePoint Server
  • SQL Server Standard Edition
  • SQL Server Enterprise Edition
  • Lync Server
  • System Center Server
  • Dynamics CRM Server
  • Dynamics AX Server
  • MS Project Server
  • Visual Studio Deployment
  • Visual Studio Team Foundation Server
  • BizTalk Server
  • Forefront Identity Manager
  • Forefront Unified Access Gateway
  • Remote Desktop Services

For SQL Server, the number of licenses required by a Compute Engine instance is tied to the number of virtual cores; for example, if you use machine type n1-standard-2, which has two virtual cores, you need two SQL Server Standard or Enterprise licenses.

The source for the above info is this article, which also contains a lot of information about running Microsoft software on Compute Engine and additional details on the process to go through.

I've seen folks in forums and online say that Windows-based Compute Engine instances start up slowly, but I personally have not felt that way. In fact, I found Windows instances start up and shut down faster than Azure VMs.

Machine types

The machine type determines the specs of your virtual machine instance, such as the amount of RAM, number of virtual cores, and persistent disk limits. Compute Engine has four classes of machine types.

  • Standard machine types
  • High CPU machine types
  • High memory machine types
  • Shared-core machine types

You can have up to 16 persistent disks with a total disk size of up to 10 TB attached to all machine types except shared-core machine types.

For more info on machine types, see this article.

Pricing

Compute Engine is priced cheaper than Azure VMs, but there are far fewer machine type options. A couple of interesting things to know: you are charged for a minimum of 10 minutes, which means if you start an instance and use it for 5 minutes, you still pay for 10 minutes. Apart from being priced competitively, Google also offers sustained use discounts. For more info on pricing, check out this article.

If you are interested in trying out Google Cloud Platform, head over to this link and sign up for the trial. You get a $300 credit to use over 6 months.

If you run into any issues and need support, ask a question on Stack Overflow and tag it with google-compute-engine.

Cheers,

</Ram>

Creating a base SharePoint 2013 image for Google Compute Engine (GCE)

In case anyone is not familiar with Google Compute Engine, it is the Infrastructure as a Service (IaaS) offering on Google Cloud Platform. Google now supports Windows Server images; unfortunately, only Windows Server 2008 R2 Datacenter Edition SP1 is available at the time of writing this article.

A bit of background: I'm testing SharePoint deployments on Google Compute Engine and needed a base image with all the required software. Additionally, there are some steps you have to perform in GCE if you want to be able to assign static IP addresses to your virtual machines, and I did not want to repeat them every time I create a virtual machine. This article covers how I built a base Windows image that contains SharePoint plus the configuration required for the virtual machine to have a static IP address. You can follow the same method to create base Windows images containing the software and configuration your application needs.

First, create a new virtual machine using the Windows Server image that is currently available.

gcloud compute --project "ce-playground" instances create "sp-base-image" --zone "us-central1-a" --machine-type "n1-standard-2" --network "sp-farm-net" --metadata "gce-initial-windows-user=rprakashg" "gce-initial-windows-password=P@ssw0rd" --maintenance-policy "MIGRATE" --scopes "https://www.googleapis.com/auth/devstorage.read_only" "https://www.googleapis.com/auth/logging.write" --image "https://www.googleapis.com/compute/v1/projects/windows-cloud/global/images/windows-server-2008-r2-dc-v20150110" --boot-disk-type "pd-standard" --boot-disk-device-name "sp-base-image" --can-ip-forward

If you look at the above command, I used n1-standard-2 for the machine type and pd-standard for the boot disk; you have the option to use SSD here if you like. Also, --can-ip-forward enables IP routing for the virtual machine. Compute Engine does not support assigning a static internal IP address to a virtual machine, but you can work around this with a combination of routes and the --can-ip-forward capability.
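To actually pin an internal IP to the instance, the route piece looks roughly like this; a hedged sketch where the route name and IP are illustrative, while the project, network, zone, and instance names match the create command above:

```shell
# Route the chosen internal IP to the instance; relies on --can-ip-forward
# being enabled on the VM (route name and IP below are illustrative)
gcloud compute routes create "sp-base-image-static-ip" \
    --project "ce-playground" \
    --network "sp-farm-net" \
    --destination-range "10.240.0.100/32" \
    --next-hop-instance "sp-base-image" \
    --next-hop-instance-zone "us-central1-a"
```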

Download the RDP file for the newly created virtual machine, remote into it, and perform the steps below. This is required because we want to assign a static internal IP address; the reason we care is that internal IP addresses are managed by Compute Engine and can change when you stop and start instances.

Enable Windows Loopback Adapter

The Windows loopback adapter allows assigning a static IP address to a virtual machine. Follow the steps below to enable it.

  • Type Device Manager in the Start menu
  • In Device Manager, right click the computer name and select Add legacy hardware
  • Click Next on the welcome screen
  • Select "Install the hardware that I manually select from a list" and click Next
  • Select Network adapters from the list and click Next
  • Select Microsoft from the manufacturers list and Microsoft Loopback Adapter from the network adapters list, then click Next

Add Windows firewall rule that allows ICMP traffic

To support pinging we will add a firewall rule to allow ICMP traffic

  • Type Windows Firewall with Advanced Security in the Start menu
  • Right click Inbound Rules and select New Rule
  • Select Custom for the rule type and click Next
  • Keep the default settings for Program and click Next
  • From the Protocol type dropdown select ICMPv4 and click Next
  • Keep the default settings for Scope, Action, and Profile
  • Provide a name and description for the rule and click Finish (for the purposes of this post I used ICMP)
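If you prefer not to click through the wizard, the same rule can be added from an elevated command prompt using the built-in netsh utility; a sketch (the rule name matches the one I used above):

```shell
netsh advfirewall firewall add rule name="ICMP" dir=in action=allow protocol=icmpv4:any,any
```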

Enable IP Forwarding

  • Run regedit
  • Navigate to HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > services > Tcpip > Parameters
  • Set the IPEnableRouter value to 1 (this enables IP routing for the instance)
  • Click OK
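Equivalently, the registry change can be scripted from an elevated command prompt, which is handy if you are baking this into a setup script; a minimal sketch:

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v IPEnableRouter /t REG_DWORD /d 1 /f
```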

Windows Updates

Next we will set the Windows Update settings to "Download updates but let me choose whether to install them" (by default it is set to automatically download and install; we don't want that, since we want to control which updates get installed on the virtual machine). Then install Microsoft Update to get updates for other products such as Office and SharePoint, and apply any outstanding updates to keep the virtual machine up to date on patches.

Install SharePoint on the Virtual Machine

Next we are going to install SharePoint with all of its prerequisites on the virtual machine. This image is going to be a base image for SharePoint 2013 with Service Pack 1, which I downloaded from my MSDN subscription. Depending on your scenario, you might choose a different version of SharePoint.

Double click the default application; this will bring up a splash screen. Click Install software prerequisites.

image 

This will bring up the prerequisite installer tool. Click Next, Accept the terms of agreement and install all prerequisites.

image

The prerequisites installer tool will reboot once during the install; after the reboot, the installer automatically continues and completes. Once the prerequisites are successfully installed, we can go ahead and install SharePoint Server by clicking the Install SharePoint Server link in the splash screen. (Note: due to the reboot during the prerequisites install, we need to bring up the splash screen again.)

Since I'm using SharePoint 2013 with SP1 downloaded from MSDN, a valid product key must be entered before I can continue with the SharePoint Server install. Since this image is going to be private and used strictly by me, I can safely use my MSDN product key. If you are going to share the image with others, you will want to use the evaluation version of the SharePoint Server bits instead of giving away your key.

For Server Type, keep Complete selected and click Install Now.

image

After installation is complete, uncheck "Run the SharePoint Products Configuration Wizard now"; since we are building a base image, we don't want to run it yet.

image

I also ran a PowerShell script that does a couple of things:

  • Turn off unneeded services
  • Apply disable loopback check fix
  • Turn off CRL check

I created a folder named Scripts on the C drive and copied in some additional PowerShell scripts that automate the configuration of SharePoint. Once a virtual machine built from this image comes up, you can simply run the scripts.

At this point we are ready to turn this virtual machine into a base image.

Run an elevated command prompt, type gcesysprep, and hit Enter. (Note: don't run the standard sysprep utility we normally use to sysprep Windows images.) The gcesysprep utility terminates the virtual machine instance; we can then delete the virtual machine without deleting the persistent disk by running the following command.

gcloud compute instances delete sp2013-image --keep-disks boot

Next we need to create a snapshot of the root persistent disk. A snapshot allows us to create a new persistent disk with the data from the snapshot; additionally, you can restore to a larger size, or even a different type of disk, than was originally used. Run the following command to create a snapshot of our sp2013-image disk that was sysprepped with the gcesysprep utility.

gcloud compute disks snapshot "sp2013-image" --project "ce-playground" --snapshot-names "sp2013-with-sp1-win2008r2sp1"

The above command will return a URI for the snapshot; you’ll want to write this down. Next, we’ll create a new persistent disk using the snapshot we created earlier:

gcloud compute disks create "sp2013withsp1onwin2k8r2sp1" --source-snapshot "https://www.googleapis.com/compute/v1/projects/ce-playground/global/snapshots/sp2013-with-sp1-win2008r2sp1" --project "ce-playground" --zone "us-central1-a"

Next we will create an image from the persistent disk created in the previous step:

gcloud compute images create "sharepoint-server-2013-sp1" --source-disk "sp2013withsp1onwin2k8r2sp1" --source-disk-zone "us-central1-a" --project "ce-playground"

You can see from the screenshot below that my custom image is now available, and I can create new virtual machines from it.

image

You can run the following command to see all the metadata associated with your custom image:

gcloud compute images describe "sharepoint-server-2013-sp1" --project "ce-playground"

At this point we can create new virtual machines from this image. Once a virtual machine is created, you can simply RDP into it and run the PowerShell script to create a new SharePoint farm or join an existing one. This significantly cuts down the time required to get the infrastructure up and running on Google Cloud Platform for a SharePoint workload. I plan to do some performance testing; my goal is to compare how Google Compute Engine stacks up against Azure IaaS, specifically when it comes to running large workloads like SharePoint in the cloud.
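For reference, the create-or-join part of such a script boils down to the standard SharePoint cmdlets. This is a sketch rather than my actual script; the server, database, passphrase, and port values below are placeholders:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell

# Create a new farm (or use Connect-SPConfigurationDatabase with the same
# database/passphrase values to join an existing farm instead).
New-SPConfigurationDatabase -DatabaseName "SP_Config" -DatabaseServer "sql01" `
    -AdministrationContentDatabaseName "SP_AdminContent" `
    -Passphrase (ConvertTo-SecureString "pass@word1" -AsPlainText -Force) `
    -FarmCredentials (Get-Credential)

# Run the remaining configuration tasks and provision Central Administration.
Install-SPHelpCollection -All
Initialize-SPResourceSecurity
Install-SPService
Install-SPFeature -AllExistingFeatures
New-SPCentralAdministration -Port 2013 -WindowsAuthProvider NTLM
Install-SPApplicationContent
```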

Cheers,

</Ram>

Enabling remote PowerShell for Windows virtual machines in Google Compute Engine

As you might already know, you can now create Windows virtual machines in Compute Engine. With that, one of the common scenarios almost everyone will encounter is running remote PowerShell commands against a virtual machine, for example to automate the installation and configuration of additional software or components. In this post I’m going to cover the steps you can take to enable remote PowerShell.

On the virtual machine

Run Windows Update and select all options, including the optional .NET Framework 4.5. This will make sure the virtual machine is current on updates.

Once the virtual machine is back online, RDP into it and run the PowerShell command below to configure PowerShell remoting.

Enable-PSRemoting -Force

Virtual network configuration, done in the Google Cloud console or on the command line with the gcloud utility

We need to create firewall rules on the virtual network to allow PowerShell remoting. The ports used by PowerShell remoting are TCP 5985 for HTTP and TCP 5986 for HTTPS. I named my rules as shown in the table below.

Rule Name               Protocol:Port
allow-powershell-http   tcp:5985
allow-powershell-https  tcp:5986
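These rules can be created in the Cloud console, or from the command line with gcloud. A sketch, assuming the default network; adjust the --network flag to match your virtual network:

```
gcloud compute firewall-rules create "allow-powershell-http" --allow "tcp:5985" --network "default"
gcloud compute firewall-rules create "allow-powershell-https" --allow "tcp:5986" --network "default"
```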

On the client machine

Since we are going to be talking to a virtual machine in a different domain, Enter-PSSession -ComputerName {VM public IP} is not going to work by default. I won’t go into too much detail about this as it’s already described here: http://blogs.technet.com/b/heyscriptingguy/archive/2013/11/29/remoting-week-non-domain-remoting.aspx

Run the PowerShell below to add the public IP of the VM running in Compute Engine to the trusted hosts list. (Keep in mind that if you stop the VM to avoid getting billed, make sure the IP has not changed the next time you bring it back up.)

Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value {replace with public ip of VM} -Concatenate -Force
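With the firewall rules and trusted hosts in place, you can open a remote session. A quick sketch; the IP below is a placeholder, and the credentials are the VM’s local administrator account:

```powershell
# Prompt for the VM's local admin user name and password.
$cred = Get-Credential

# Interactive remote session.
Enter-PSSession -ComputerName 203.0.113.10 -Credential $cred

# Or run a one-off command without an interactive session.
Invoke-Command -ComputerName 203.0.113.10 -Credential $cred -ScriptBlock { Get-Service W3SVC }
```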

At this point you should be all set to remote PowerShell into your virtual machines running on Compute Engine. Hopefully someday remote PowerShell support will be baked into Windows-based virtual machines in Compute Engine, so you won’t have to worry about these manual steps and can run your automation scripts directly against a virtual machine as soon as it is provisioned and online.

Google is giving away a $300 credit, usable over 60 days, to anyone who wants to try out Google Cloud Platform. See more info here: https://cloud.google.com/free-trial/. Go sign up and check it out.

Hope that helps,

</Ram>

Enabling remote access for Windows virtual machines in Google Compute Engine (GCE)

When you create a Windows virtual machine in Google Compute Engine (GCE) using a virtual network other than the default, and you then try to remote desktop into the newly created virtual machine, you will get the error “remote access to server is not enabled”.

image

The reason is that there are no firewall rules configured for TCP port 3389. If you go to the virtual network you used when the VM was created and look under firewall rules, you’ll see nothing. To enable RDP, you simply need to create a new firewall rule for TCP port 3389.

See below

image

After the firewall rule is created, you should be able to RDP into the virtual machine.
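The rule can also be created from the command line with gcloud instead of the console. A sketch, assuming your network is named my-network:

```
gcloud compute firewall-rules create "allow-rdp" --allow "tcp:3389" --network "my-network"
```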

It would be nice if this were done automatically when the VM is created. Azure already does this; for example, when a Linux VM is created, TCP port 22 is automatically added to the endpoints to enable SSH into the VM.

Hope that helps

</Ram>

“ToJson()” extension method

I found myself having to serialize objects into JSON strings on so many occasions that it finally dawned on me that I could just have a ToJson() extension method, much like the ToString() method on object. Here is the code for it.

Assuming you have the Json.NET library included in your project, create a static class and name it JsonExtensionMethods. Add the following using statements:

using Newtonsoft.Json;

using Newtonsoft.Json.Serialization;

Add an extension method ToJson to this class. In this method we’ll use the JsonSerializerSettings class to specify that we are not interested in null properties; additionally, we’ll use CamelCasePropertyNamesContractResolver so that the resulting JSON string follows the camel-case convention. See the code below.
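A minimal version of the method described above looks like this (a sketch of the approach, not necessarily the exact original code):

```csharp
public static class JsonExtensionMethods
{
    private static readonly JsonSerializerSettings Settings = new JsonSerializerSettings
    {
        // Skip properties whose value is null.
        NullValueHandling = NullValueHandling.Ignore,
        // Emit property names in camelCase.
        ContractResolver = new CamelCasePropertyNamesContractResolver()
    };

    public static string ToJson(this object obj)
    {
        return JsonConvert.SerializeObject(obj, Settings);
    }
}
```

For example, new { FirstName = "Ram", LastName = (string)null }.ToJson() produces {"firstName":"Ram"}.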

You’ll notice a ToJson() method is now available on all your classes.

Hope that helps,

Cheers,

</Ram>