Beyond The Corner Office

Blog

My Photographic Journey

Corey and I moved to Lafayette from New Orleans in April of last year.  Around the same time I acquired my Sony a6000 and have used it to capture our travels together.  Recently I realized that I have done a poor job of capturing the world close to home.  I have also done a poor job of talking about my photographic journey.


This site has been dedicated primarily to my technology posts, but I have really been cataloging that content through work.  You can find my latest technology posts in the locations below.

So to that end, the current posts will certainly remain here, but future posts will likely be about photographing my journey.  I am looking forward to sharing it with you.  If you are interested in following me on photo social media sites, check out the links below.  You can check out my full portfolio through the link in the menu above.

Earlier this month I attended a DevOps bootcamp event Microsoft hosted in one of our Bellevue offices.  We were able to bring in members of the product group to discuss how Microsoft approaches DevOps internally and how it has contributed to the incredible release pace for Azure features.  During the ensuing discussion, the book The Phoenix Project was mentioned.  It was not a title I had heard of, but my interest was piqued, so I downloaded it to my Kindle for the flight home.  What I uncovered was a great story about how all of IT, through the use of DevOps, can be a competitive advantage or a business anchor.  The choice of which is completely up to each organization.

Many of the books that I read about technology take one of two routes.  The first type is technical: click here, or type this line of code.  The others sell themselves as IT books but are really more about business processes.  I found The Phoenix Project more closely aligned with the latter, but deeper than most at relating the importance of integrating IT into the business.  The authors allow everyone, even non-technical readers, to understand the challenges and the need to approach IT with a DevOps mentality.

“DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.”—Donovan Brown in the book, “DevOps on the Microsoft Stack” (Wouter de Kort, 2016).

The Phoenix Project is fiction, but as someone with 14 years of IT experience under my belt, I could see its story play out at any number of companies.  The challenges are very relatable, and while they are approached from the IT perspective, someone outside of IT would be remiss to dismiss the story.  I don’t want to spoil any of the details for a potential reader, but approaching IT as if it were a factory allows non-IT personnel to understand DevOps principles as well, and in my opinion the book is well worth the read.

Are you using DevOps in your organization?  Have you read The Phoenix Project?  I would love to hear your thoughts on how the principles outlined in the book play out in your day-to-day operations in the comments below.  Don’t forget to follow me on Twitter and follow the Microsoft US Azure Partner Community to stay up to date on the latest about DevOps and the Microsoft cloud.

As you have seen, I have been doing quite a bit of work with ARM templates and VMs recently.  This post is no different.  I have been working on a project where multiple VMs need to be created from a custom image and they need to be joined to an existing domain.  In this post I will walk through the elements of the ARM template I created.

NOTE: This template is not based on any best practices; it is simply a proof of concept.

TL;DR: Grab the template from my GitHub account.

Creating Multiple Resources

The power of ARM templates is the ability to create complex environments from a single definition file.  Part of that power comes in the ability to create multiple resources of the same type.  This happens through the use of the copy tag when defining a resource.

"copy": {
  "name": "storagecopy",
  "count": "[parameters('count')]"
}

Access to the current iteration is available through the copyIndex() function.  This provides the flexibility to append the index to names, creating a unique name for each iteration.  An example of this can be seen in the "name" element below.

"name": "[concat(variables('storageAccountName'),copyIndex())]"

Virtual Machines from a Custom Image

Before we dive into the template, it is important to note that, at the time of writing, the virtual machine custom image must be in the same storage account as the .vhd that will be deployed with the new virtual machines.  It is for this reason that this template creates a “Transfer VM” with a custom script extension.  The script uses PowerShell and AzCopy to move the image from the source storage account to the target storage account.  The gold image can be removed after the VMs are deployed without any issue, and the Transfer VM can also be removed.  That cleanup could be scripted but is not included in the current version of the template.  If you want to take a deeper look at creating a VM with this transfer model, check out the quick start template on GitHub.
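As a rough sketch, a transfer script of this kind does something like the following.  This is illustrative only, not the actual ImageTransfer.ps1; the parameter names mirror what the template passes in commandToExecute, and the AzCopy path assumes the v5-era installer location.

```powershell
# Illustrative sketch only - not the actual ImageTransfer.ps1
param(
    [string]$SourceImage,       # full URI of the gold image .vhd
    [string]$SourceSAKey,       # source storage account key
    [string]$DestinationURI,    # e.g. https://<account>.blob.core.windows.net/vhds
    [string]$DestinationSAKey   # destination storage account key
)

# Split the source URI into a container URI and a blob name for AzCopy
$blobName  = $SourceImage.Split('/')[-1]
$sourceDir = $SourceImage.Substring(0, $SourceImage.LastIndexOf('/'))

# Server-side blob copy between storage accounts (AzCopy v5 syntax)
& "C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe" `
    /Source:$sourceDir /Dest:$DestinationURI `
    /SourceKey:$SourceSAKey /DestKey:$DestinationSAKey `
    /Pattern:$blobName
```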

Networking

This template also assumes that you already have a virtual network created, and it takes the network details as parameters so the new virtual machines can be deployed to that network.  The public IP addresses and NICs will all be attached to this network.  If you have different network requirements, you will need to make those changes before deployment.  In my demo environment, my domain controller is on the same vnet that the virtual machines will be deployed to.  Because of this, I have set my domain controllers as the DNS servers and set up external forwarders there.  This ensures that domain join requests are routed to the domain controllers.  In other words, standard networking rules apply, just as if you were doing this on-premises.
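In the template, those existing network parameters are resolved into a subnet reference using the cross-resource-group form of resourceId(), which each NIC's ipConfiguration then points at:

```json
"vnetID": "[resourceId(parameters('existingVirtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', parameters('existingVirtualNetworkName'))]",
"subnetRef": "[concat(variables('vnetID'), '/subnets/', parameters('subnetName'))]"
```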

Domain Join

The domain join is performed by a dedicated extension; previously it needed to be done through DSC.  I find this to be much smoother.  More information about the extension can be found on GitHub.

The Business

Now, down to the code.  I know that is what everyone came to see anyway.  If you want to download it directly or make changes/comments, please do so through GitHub.


{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"storageAccountName": {
"type": "string",
"metadata": {
"description": "Prefix name of the storage account to be created"
}
},
"vmCopies": {
"type": "int",
"defaultValue": 1,
"metadata": {
"description": "Number of VMs (and storage accounts) to create"
}
},
"storageAccountType": {
"type": "string",
"defaultValue": "Standard_LRS",
"allowedValues": [
"Standard_LRS",
"Standard_GRS",
"Standard_ZRS",
"Premium_LRS"
],
"metadata": {
"description": "Storage Account type"
}
},
"vmName": {
"type": "string",
"metadata": {
"description": "Name prefix for the VMs"
}
},
"adminUserName": {
"type": "string",
"metadata": {
"description": "Admin username for the virtual machines"
}
},
"adminPassword": {
"type": "securestring",
"metadata": {
"description": "Admin password for virtual machines"
}
},
"dnsLabelPrefix": {
"type": "string",
"metadata": {
"description": "DNS Name Prefix for Public IP"
}
},
"windowsOSVersion": {
"type": "string",
"defaultValue": "2012-R2-Datacenter",
"allowedValues": [
"2008-R2-SP1",
"2012-Datacenter",
"2012-R2-Datacenter"
],
"metadata": {
"description": "The Windows version for the VMs. Allowed values: 2008-R2-SP1, 2012-Datacenter, 2012-R2-Datacenter."
}
},
"domainToJoin": {
"type": "string",
"metadata": {
"description": "The FQDN of the AD domain"
}
},
"domainUsername": {
"type": "string",
"metadata": {
"description": "Username of the account on the domain"
}

},
"ouPath": {
"type": "string",
"defaultValue": "",
"metadata": {
"description": "Specifies an organizational unit (OU) for the domain account. Enter the full distinguished name of the OU in quotation marks. Example: 'OU=testOU; DC=domain; DC=com'"
}
},
"domainJoinOptions": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "Set of bit flags that define the join options. Default value of 3 is a combination of NETSETUP_JOIN_DOMAIN (0x00000001) & NETSETUP_ACCT_CREATE (0x00000002) i.e. will join the domain and create the account on the domain. For more information see https://msdn.microsoft.com/en-us/library/aa392154(v=vs.85).aspx"
}
},
"existingVirtualNetworkName": {
"type": "string",
"metadata": {
"description": "Name of the existing VNET"
}
},
"subnetName": {
"type": "string",
"metadata": {
"description": "Name of the existing subnet"
}
},
"existingVirtualNetworkResourceGroup": {
"type": "string",
"metadata": {
"description": "Name of the existing VNET Resource Group"
}
},
"transferVmName": {
"type": "string",
"defaultValue": "TransferVM",
"minLength": 3,
"maxLength": 15,
"metadata": {
"description": "Name of the Windows VM that will copy the VHD from the source storage account to the new storage account created in this deployment; this is known as the Transfer VM."
}
},
"customImageStorageContainer": {
"type": "string",
"metadata": {
"description": "Name of the storage container for the gold image"
}
},
"customImageName": {
"type": "string",
"metadata": {
"description": "Name of the VHD to be used as source syspreped/generalized image to deploy the VM. E.g. mybaseimage.vhd."
}
},
"sourceImageURI": {
"type": "string",
"metadata": {
"description": "Full URIs for one or more custom images (VHDs) that should be copied to the deployment storage account to spin up new VMs from them. URLs must be comma separated."
}
},
"sourceStorageAccountResourceGroup": {
"type": "string",
"metadata": {
"description": "Resource group name of the source storage account."
}
}
},
"variables": {
"storageAccountName": "[parameters('storageAccountName')]",
"imagePublisher": "MicrosoftWindowsServer",
"imageOffer": "WindowsServer",
"OSDiskName": "osdiskforwindows",
"nicName": "[parameters('vmName')]",
"addressPrefix": "10.0.0.0/16",
"subnetName": "Subnet",
"subnetPrefix": "10.0.0.0/24",
"publicIPAddressName": "[parameters('vmName')]",
"publicIPAddressType": "Dynamic",
"vmStorageAccountContainerName": "vhds",
"vmSize": "Standard_D1",
"windowsOSVersion": "2012-R2-Datacenter",
"virtualNetworkName": "myVNET",
"vnetID": "[resourceId(parameters('existingVirtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', parameters('existingVirtualNetworkName'))]",
"subnetRef": "[concat(variables('vnetID'),'/subnets/', parameters('subnetName'))]",
"customScriptFolder": "CustomScripts",
"trfCustomScriptFiles": [
"ImageTransfer.ps1"
],
"sourceStorageAccountName": "[substring(split(parameters('sourceImageURI'),'.')[0],8)]"
},
"resources": [
{
"name": "[concat(variables('storageAccountName'),copyIndex())]",
"copy": {
"count": "[parameters('vmCopies')]",
"name": "storagecopy"
},
"type": "Microsoft.Storage/storageAccounts",
"location": "[resourceGroup().location]",
"sku": {
"name": "[parameters('storageAccountType')]"
},
"apiVersion": "2016-01-01",
"kind": "Storage",
"properties": {}
},
{
"name": "[concat(variables('publicIPAddressName'),copyIndex())]",
"dependsOn": [
"storagecopy"
],
"apiVersion": "2016-03-30",
"copy": {
"count": "[parameters('vmCopies')]",
"name": "publicipcopy"
},
"type": "Microsoft.Network/publicIPAddresses",
"location": "[resourceGroup().location]",
"properties": {
"publicIPAllocationMethod": "[variables('publicIPAddressType')]",
"dnsSettings": {
"domainNameLabel": "[concat(parameters('dnsLabelPrefix'),copyIndex())]"
}
}
},
{
"name": "[parameters('transferVmName')]",
"dependsOn": [
"storagecopy"
],
"apiVersion": "2016-03-30",
"type": "Microsoft.Network/publicIPAddresses",
"location": "[resourceGroup().location]",
"properties": {
"publicIPAllocationMethod": "[variables('publicIPAddressType')]",
"dnsSettings": {
"domainNameLabel": "[concat(parameters('dnsLabelPrefix'),'trans1')]"
}
}
},
{
"apiVersion": "2016-03-30",
"copy": {
"count": "[parameters('vmCopies')]",
"name": "niccopies"
},
"type": "Microsoft.Network/networkInterfaces",
"name": "[concat(variables('nicName'),copyIndex())]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Network/publicIPAddresses/',variables('publicIPAddressName'),copyIndex())]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses',concat(variables('publicIPAddressName'),copyIndex()))]" },
"subnet": {
"id": "[variables('subnetRef')]"
}
}
}
]
}
},
{
"apiVersion": "2016-03-30",
"type": "Microsoft.Network/networkInterfaces",
"name": "[parameters('transferVmName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Network/publicIPAddresses/',parameters('transferVmName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses',parameters('transferVmName'))]"
},
"subnet": {
"id": "[variables('subnetRef')]"
}
}
}
]
}
},

{
"comments": "# TRANSFER VM",
"name": "[parameters('transferVmName')]",
"type": "Microsoft.Compute/virtualMachines",
"location": "[resourceGroup().location]",
"apiVersion": "2015-06-15",
"dependsOn": [
"storagecopy",
"[concat('Microsoft.Network/networkInterfaces/', parameters('transferVmName'))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[variables('vmSize')]"
},
"osProfile": {
"computerName": "[parameters('transferVmName')]",
"adminUsername": "[parameters('adminUserName')]",
"adminPassword": "[parameters('adminPassword')]"
},
"storageProfile": {
"imageReference": {
"publisher": "[variables('imagePublisher')]",
"offer": "[variables('imageOffer')]",
"sku": "[parameters('windowsOSVersion')]",
"version": "latest"
},
"osDisk": {
"name": "[parameters('transferVmName')]",
"vhd": {
"uri": "[concat('http://', variables('storageAccountName'), '0', '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/',parameters('transferVmName'),'.vhd')]"
},
"caching": "ReadWrite",
"createOption": "FromImage"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('transferVmName'))]"
}
]
}
},
"resources": [
{
"comments": "Custom Script that copies VHDs from source storage account to destination storage account",
"apiVersion": "2015-06-15",
"type": "extensions",
"name": "[concat(parameters('transferVmName'),'CustomScriptExtension')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', parameters('transferVmName'))]"
],
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"autoUpgradeMinorVersion": true,
"typeHandlerVersion": "1.4",
"settings": {
"fileUris": [
"https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/201-vm-custom-image-new-storage-account/ImageTransfer.ps1"
]
},
"protectedSettings": {
"commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ','ImageTransfer.ps1 -SourceImage ',parameters('sourceImageURI'),' -SourceSAKey ', listKeys(resourceId(parameters('sourceStorageAccountResourceGroup'),'Microsoft.Storage/storageAccounts', variables('sourceStorageAccountName')), '2015-06-15').key1, ' -DestinationURI https://', variables('StorageAccountName'), '.blob.core.windows.net/vhds', ' -DestinationSAKey ', listKeys(concat('Microsoft.Storage/storageAccounts/', variables('StorageAccountName')), '2015-06-15').key1)]"
}
}
}
]
},

{
"apiVersion": "2015-06-15",
"type": "Microsoft.Compute/virtualMachines",
"name": "[concat(parameters('vmName'),copyIndex())]",
"copy": {
"count": "[parameters('vmCopies')]",
"name": "vmcopies"
},
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Storage/storageAccounts/', variables('storageAccountName'),copyIndex())]",
"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'),copyIndex())]",
"[concat('Microsoft.Compute/virtualMachines/', parameters('transferVmName'),'/extensions/',parameters('transferVmName'),'CustomScriptExtension')]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[variables('vmSize')]"
},
"osProfile": {
"computerName": "[concat(parameters('vmName'),copyIndex())]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]"
},
"storageProfile": {
"osDisk": {
"name": "[concat(parameters('vmName'),copyIndex(),'-osdisk')]",
"osType": "Windows",
"createOption": "FromImage",
"caching": "ReadWrite",
"image": {
"uri": "[concat('http://', variables('StorageAccountName'), copyIndex(), '.blob.core.windows.net/',variables('vmStorageAccountContainerName'),'/Microsoft.Compute/Images/',parameters('customImageStorageContainer'),'/',parameters('customImageName'))]"
},
"vhd": {
"uri": "[concat('http://', variables('StorageAccountName'), copyIndex(), '.blob.core.windows.net/',variables('vmStorageAccountContainerName'),'/',parameters('vmName'),copyIndex(),'-osdisk.vhd')]"
}
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces',concat(variables('nicName'),copyIndex()))]"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": "true",
"storageUri": "[concat('http://',variables('storageAccountName'),'.blob.core.windows.net')]"
}
}
}
},
{
"apiVersion": "2015-06-15",
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(parameters('vmName'),copyIndex(),'/joindomain')]",
"copy": {
"count": "[parameters('vmCopies')]",
"name": "domainextension"
},
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'),copyIndex())]"
],
"properties": {
"publisher": "Microsoft.Compute",
"type": "JsonADDomainExtension",
"typeHandlerVersion": "1.3",
"autoUpgradeMinorVersion": true,
"settings": {
"Name": "[parameters('domainToJoin')]",
"OUPath": "[parameters('ouPath')]",
"User": "[concat(parameters('domainToJoin'), '\\', parameters('adminUserName'))]",
"Restart": "true",
"Options": "[parameters('domainJoinOptions')]"
},
"protectedSettings": {
"Password": "[parameters('adminPassword')]"
}
}
}
]
}

Are you using Azure Resource Manager Templates?  If so, we would love to hear about how you are using them in the comments below.  If you like this content and want to know how I work with Microsoft Partners, please check out the US Partner Community Blog for some of my other posts.  Don’t forget to follow me on Twitter.


When I started at Microsoft 18 months ago, I joined the National Partner Technology Strategist team focusing on the Azure Platform.  In my role as a National Partner Technology Strategist I was focused on three main areas with a national coverage responsibility: community, readiness, and practice development.  Because Azure is a platform and it is not possible to be an “expert” in all of Azure, Microsoft leadership recognized the need to focus the PTS on a narrow workload to better serve partners.

Microsoft made some organizational changes, consolidating all PTS resources into a single organization and then into Enablement Team Units.  These units are further narrowed into workload specializations.  It is in this unit that I will focus my efforts with partners on Azure PaaS services.  While Platform as a Service could cover almost everything on Azure, many services are split under other workload areas.  My focus will be on Logic Apps, Service Fabric, Cloud Services, Web Apps, API Apps, and Redis Cache.  There are also many underlying topics that will be important with these workloads, including DevOps, Application Lifecycle Management, Containerization, Desired State Configuration, and more.

While I am sure that my passion for data and reporting will continue to endure, the focus of postings on this blog will likely reflect the time I am spending in these areas.  Is there anything in particular that you would like to see me cover about the topics above?  Let me know in the comments below.  If you like the content of this blog, follow me on Twitter where I share and discuss lots of similar content.

In my previous article, Building Azure Resource Manager Templates, I covered how to get started with Azure Resource Manager templates.  While they are certainly great for basic deployments, where they really shine is their ability to handle complex deployments.  This post will cover the Custom Script Extension and how it can be used to configure virtual machines during the deployment process.

Note: This article assumes that you are familiar with the Azure portal and Visual Studio.  This is not a full step-by-step article; while I will outline all of the things that need to happen, I am not doing a “click here” walk-through.

The Setup

When I was working on my ARM template to deploy SQL Server 2016 with the AdventureWorks sample databases installed, I needed a way to configure the virtual machine once it was deployed.  This is done using the Custom Script for Windows extension.  As can be seen in the image below, it is dependent upon the creation of the virtual machine and requires that the virtual machine be created before the extension is added.

[Image: Custom Script Extension dependency shown in the JSON Outline]

The Business

After adding the Custom Script Extension, a resource with the type "extensions" is added to the virtual machine in the ARM template.  The code can be seen below; it shows up as nested in the JSON Outline window.  Adding the extension also creates a CustomScripts folder in the solution.  In the case of a Windows extension, the script is a PowerShell (.ps1) file.

{
  "name": "test",
  "type": "extensions",
  "location": "[resourceGroup().location]",
  "apiVersion": "2015-06-15",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', parameters('Sql2016Ctp3DemoName'))]"
  ],
  "tags": {
    "displayName": "test"
  },
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.4",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "[concat(parameters('_artifactsLocation'), '/', variables('testScriptFilePath'), parameters('_artifactsLocationSasToken'))]"
      ],
      "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ', variables('testScriptFilePath'))]"
    }
  }
}

From the custom script, I can perform a host of different actions based on PowerShell.  The code below performs a number of actions.  It creates a folder structure, downloads files, creates and executes a PowerShell function to extract the zip files, moves files, executes T-SQL, and opens firewall ports.

# DeploySqlAw2016.ps1
#
# Parameters

# Variables
$targetDirectory = "C:\SQL2016Demo"
$adventureWorks2016DownloadLocation = "https://sql2016demoaddeploy.blob.core.windows.net/adventureworks2016/AdventureWorks2016CTP3.zip"

# Create Folder Structure
if(!(Test-Path -Path $targetDirectory)){
    New-Item -ItemType Directory -Force -Path $targetDirectory
}
if(!(Test-Path -Path $targetDirectory\adventureWorks2016CTP3)){
    New-Item -ItemType Directory -Force -Path $targetDirectory\adventureWorks2016CTP3
}

# Download the SQL Server 2016 CTP 3.3 AdventureWorks database files.
Set-Location $targetDirectory
Invoke-WebRequest -Uri $adventureWorks2016DownloadLocation -OutFile $targetDirectory\AdventureWorks2016CTP3.zip

# Create a function to expand zip files
function Expand-ZIPFile($file, $destination)
{
    $shell = New-Object -ComObject shell.application
    $zip = $shell.NameSpace($file)
    foreach($item in $zip.items())
    {
        $shell.Namespace($destination).copyhere($item)
    }
}

# Expand the downloaded files
Expand-ZIPFile -file $targetDirectory\AdventureWorks2016CTP3.zip -destination $targetDirectory\adventureWorks2016CTP3
Expand-ZIPFile -file $targetDirectory\adventureWorks2016CTP3\SQLServer2016CTP3Samples.zip -destination $targetDirectory\adventureWorks2016CTP3

# Copy backup files to Default SQL Backup Folder
Copy-Item -Path $targetDirectory\AdventureWorks2016CTP3\*.bak -Destination "C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup"

# Restore SQL Backups for AdventureWorks and AdventureWorksDW
Import-Module SQLPS -DisableNameChecking
cd \sql\localhost\

Invoke-Sqlcmd -Query "USE [master]
RESTORE DATABASE [AdventureWorks2016CTP3] FROM  DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\AdventureWorks2016CTP3.bak' WITH  FILE = 1,  MOVE N'AdventureWorks2016CTP3_Data' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\AdventureWorks2016CTP3_Data.mdf',  MOVE N'AdventureWorks2016CTP3_Log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\AdventureWorks2016CTP3_Log.ldf',  MOVE N'AdventureWorks2016CTP3_mod' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\AdventureWorks2016CTP3_mod',  NOUNLOAD,  REPLACE,  STATS = 5

GO" -ServerInstance LOCALHOST -QueryTimeout 0

Invoke-Sqlcmd -Query "USE [master]
	RESTORE DATABASE [AdventureworksDW2016CTP3] FROM  DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\AdventureWorksDW2016CTP3.bak' WITH  FILE = 1,  MOVE N'AdventureWorksDW2014_Data' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2016CTP3_Data.mdf',  MOVE N'AdventureWorksDW2014_Log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2016CTP3_Log.ldf',  NOUNLOAD,  REPLACE,  STATS = 5

	GO" -ServerInstance LOCALHOST -QueryTimeout 0

# Firewall Rules
# Enabling SQL Server ports
New-NetFirewallRule -DisplayName "SQL Server" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
New-NetFirewallRule -DisplayName "SQL Admin Connection" -Direction Inbound -Protocol TCP -LocalPort 1434 -Action Allow
New-NetFirewallRule -DisplayName "SQL Database Management" -Direction Inbound -Protocol UDP -LocalPort 1434 -Action Allow
New-NetFirewallRule -DisplayName "SQL Service Broker" -Direction Inbound -Protocol TCP -LocalPort 4022 -Action Allow
New-NetFirewallRule -DisplayName "SQL Debugger/RPC" -Direction Inbound -Protocol TCP -LocalPort 135 -Action Allow
# Enabling SQL Analysis Services ports
New-NetFirewallRule -DisplayName "SQL Analysis Services" -Direction Inbound -Protocol TCP -LocalPort 2383 -Action Allow
New-NetFirewallRule -DisplayName "SQL Browser" -Direction Inbound -Protocol TCP -LocalPort 2382 -Action Allow
# Enabling misc. applications
New-NetFirewallRule -DisplayName "HTTP" -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow
New-NetFirewallRule -DisplayName "SSL" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow
New-NetFirewallRule -DisplayName "SQL Server Browse Button Service" -Direction Inbound -Protocol UDP -LocalPort 1433 -Action Allow
# Configure the Windows Firewall profile
Set-NetFirewallProfile -DefaultInboundAction Block -DefaultOutboundAction Allow -NotifyOnListen True -AllowUnicastResponseToMulticast True

By default the custom script is located in the solution, but it does not have to be.  In the code example below, I actually call the script from GitHub.  Note the fileUris link.

"resources": [
  {
    "name": "deploySql2016Ctp3",
    "type": "extensions",
    "location": "[resourceGroup().location]",
    "apiVersion": "2015-06-15",
    "dependsOn": [
      "[concat('Microsoft.Compute/virtualMachines/', parameters('Sql2016Ctp3DemoName'))]"
    ],
    "tags": {
      "displayName": "deploySql2016Ctp3"
    },
    "properties": {
      "publisher": "Microsoft.Compute",
      "type": "CustomScriptExtension",
      "typeHandlerVersion": "1.4",
      "autoUpgradeMinorVersion": true,
      "settings": {
        "fileUris": [
          "https://raw.githubusercontent.com/jgardner04/Sql2016Ctp3Demo/master/Sql2016Ctp3Demo/CustomScripts/deploySql2016Ctp3.ps1"
        ],
        "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File deploySql2016Ctp3.ps1"
      }
    }
  }
]

In this post we showed how to create a virtual machine and customize it through the use of Azure Resource Manager templates.  In future posts we will explore how to expand their use to create complex services that include multiple Azure resources.  Are you using Azure Resource Manager templates in your environment?  We would love to hear about it in the comments below.

If you like the content on my blog, I also write for the US Azure and Data Analytics Partner Blogs.  I encourage you to check those out for more great resources.  Also, don’t forget to follow me on Twitter, as much of what I talk about is related to Azure.