When choosing a platform to build an application for, developers need to consider a number of common factors – skillset in the market, end user reach of the platform, supportability, roadmap, and so on. This is one of the reasons why Windows Phone has had such a difficult time; development houses won’t choose to invest time into it because of limited user reach, uncertain roadmap and support, and the need to develop new skills. There’s just no incentive there to do so, and there is much risk.

Reaching Equilibrium

When I say application delivery platform, I refer to any of a number of areas, including:

  • Desktop Operating Systems
  • Mobile Operating Systems
  • Virtualisation
  • Cloud Platforms

In each of these areas, during their birth as a new delivery paradigm there tend to be many contenders vying for developer attention. I strongly believe that in any category, over time the number of commonly used and accepted platforms will naturally tend towards a low number as a small subset of them reach developer critical mass, and the others lose traction.

Once you reach this developer critical mass, a platform becomes self-perpetuating. End-user reach is massive, developer tools are well matured, roadmaps are defined, and needed development skills fill the market. Once a few platforms in a category reach this point, no others can compete as they can’t attract developers, the laggards wither and die, and the platforms in the category become constant.

I call this process HomeOStasis.

  • In the Desktop and Server arena this has tended to Windows and *nix.
  • In the Mobile arena Android and iOS.
  • In Virtualisation land we predominantly have VMware and Hyper-V.
  • In Cloud there is currently AWS, Azure, and Google Cloud Platform.

It’s still a contentious view among both hardware vendors and IT Pros, as we haven’t quite reached that HomeOStatic point with cloud yet, but I can’t see cloud native landing as anything other than AWS, Azure, and GCP. SoftLayer and Oracle do fit the bill, but given the certainty of eventual HomeOStasis, I don’t see them gaining the developer critical mass they need to become part of the core of the stable and defined cloud platform market.

 

Cloud and Virtualisation

Note that this isn’t Cloud vs Virtualisation: each is a powerful and valuable application delivery platform with its own strengths and weaknesses, designed to achieve and deliver different outcomes, just like desktop and mobile operating systems.

Virtualisation is designed to support traditional monolithic and multi-tier applications, building resiliency into the fabric and hardware layers to support high availability of applications which can take advantage of scale-up functionality.

Cloud is designed to support containerised and microservice-based applications which span IaaS and PaaS and can take advantage of scale-out functionality, with resiliency designed into the application layer.

Yes you can run applications designed for virtualisation in a cloud-native environment, but it’s rarely the best thing to do, and it’s unlikely that they’ll be able to take advantage of most of the features which make cloud so attractive in the first place.

 

Hybrid Cloud and Multi Cloud

Today, the vast majority of customers I speak to say they are adopting a hybrid cloud approach, but the reality is that the implementation is multi cloud. The key differentiator between the two is that in hybrid cloud the development, deployment, management, and capabilities are consistent across clouds, while in multi cloud the experience is disjointed and requires multiple skillsets and tools. Sometimes organisations will employ separate people to manage different cloud environments, sometimes one team will manage them all. Rarely is there an instance where the platforms involved in multi cloud are used to their full potential.

Yes, there are cloud brokerages and tools which purport to provide a single management platform and a consistent experience across multiple different cloud platforms, but in my opinion this always results in a diminished overall experience. You end up with a lowest-common-denominator outcome where you’re unable to take advantage of many of the unique and powerful features in each platform, all for the sake of consistent and normalised management. It’s actually not that different to development and management in desktop and mobile OSs – there have always been comparisons and trade-offs between native and cross-platform tooling and development, with ardent supporters in each camp.

Today, the need either to manage in a multi cloud model or to accept a diminished experience through an abstracted management layer is a direct consequence of every cloud and service provider delivering a different set of capabilities and APIs, coupled with a very real customer desire to avoid vendor lock-in.

 

Enabling True Cloud Consistency

The solution has been for Microsoft to finally deliver a platform which is consistent with Azure not just in look and feel, but truly consistent in capabilities, tooling, APIs, and roadmap. Through the appliance-based approach of Azure Stack, this consistency can be guaranteed through any vendor at any location.

This is true hybrid cloud, and enables the use of all the rich cloud-native capabilities within the Azure ecosystem, as well as the broad array of supported open-source development tools, without the risk of vendor lock-in. Applications can span and be moved between multiple providers with ease, with a common development and management skillset for all.

Once we have reached a point of HomeOStasis in Cloud, platform lock-in through use of native capabilities is not a concern either, as roadmap, customer-reach, skillset in the market, and support are all taken care of.

A little-discussed benefit of hybrid cloud through Azure Stack is the mitigation of collapse or failure of a vendor. An application which runs in Azure and Azure Stack can span multiple providers and the public cloud, protected by default from the failure or screw-up of one or more of those providers. The cost implications of architecting like this are similar to multi cloud; however, the single skillset, management framework, and development experience can significantly help reduce TCO.

Azure Stack isn’t a silver bullet to solve all application delivery woes, and virtualisation platforms will remain as important as ever for many years to come. Over and above virtualisation though, when evaluating your cloud-native strategy, there are some important questions to bear in mind:

  • Who do I think will be the Cloud providers that remain when the dust settles and we achieve HomeOStasis?
  • Do I want to manage a multi cloud or hybrid cloud environment?
  • Do I want to use native or cross-platform tooling?
  • What will common and desirable skillsets in the market be?
  • Where will the next wave of applications I want to deploy be available to me from?

I’m choosing to invest a lot of my energy into learning Azure and Azure Stack, because I believe that the Azure ecosystem offers genuine and real differentiated capability over and above any other cloud-native vendor, and will be a skillset which has both value and longevity.

When any new platform paradigm comes into being, it’s a complete roll of the dice as to which will settle into common use. We’re far enough along in the world of cloud now to make such judgements though, and for Azure and Azure Stack it looks like a rosy future ahead indeed.

When you deploy a new Azure Function, one of the created elements is a Storage Account Connection, either to an existing storage account or to a new one. This is listed in the ‘Integrate’ section of the Function, and automatically sets the appropriate connection string behind the scenes when you select an existing connection, or create a new one.

clip_image001

Out of the box however, this didn’t work correctly for me, throwing an error about the storage account being invalid.

clip_image002

Normally to fix this, we could just go to Function App Settings, and Configure App Settings to check and fix the connection string…

clip_image003

… however after briefly flashing up, the App Settings blade reverts to the following ‘Not found’ status.

clip_image004

There are a fair few ways to fix these existing App Settings connection strings, or just have them deployed correctly in the first place (e.g. in an appsettings.json file). In this instance though I’m going to fix the existing strings through PowerShell, as it’s always my preferred troubleshooting tool.

Fire up an elevated PowerShell window, and let’s get cracking!

  1. Ensure all pre-requisites are enabled/imported/added.

Assuming you have followed all the steps to install Azure PowerShell (which you must have in order to have App Service deployed 🙂), run the following from within the AzureStack-Tools folder (available from GitHub):

 
# Import the required modules
Import-Module AzureRM
Import-Module AzureStack
Import-Module .\Connect\AzureStack.Connect.psm1

# Register the Azure Stack user environment
Add-AzureStackAzureRmEnvironment -Name "AzureStackUser" -ArmEndpoint "https://management.local.azurestack.external"

# Login with your AAD User (not Admin) Credentials
Login-AzureRmAccount -EnvironmentName "AzureStackUser"
  2. Investigate the status of the App Settings in the Functions App.

The Functions App is just a Web App, so we can connect to it and view settings as we would any normal Web App.

The Function in question here is called subtwitr-func, and lives within the Resource Group subtwitr-dev-rg.

clip_image005

 
$myResourceGroup = "subtwitr-dev-rg"
$mySite = "subtwitr-func"

# A Functions App is just a Web App, so grab its Production slot
$webApp = Get-AzureRMWebAppSlot -ResourceGroupName $myResourceGroup -name $mySite -slot Production

# Convert the AppSettings list into a hashtable for easy viewing and editing
$appSettingList = $webApp.SiteConfig.AppSettings
$hash = @{}
ForEach ($kvp in $appSettingList)
{
    $hash[$kvp.Name] = $kvp.Value
}

$hash | fl

Below is the output of the above code, which shows all our different connection strings. There are two storage connection strings I’ve tried to create here – subtwitr_STORAGE which I created manually and storagesjaohrurf7flw_STORAGE which was created via ARM deployment.

I’m not worried about exposing the Account Keys for these isolated test environments so haven’t censored them.

clip_image006

As neither of these strings contains explicit paths to the Azure Stack endpoints, they are trying to resolve to the public Azure endpoints. Let’s fix that for the storagesjaohrurf7flw_STORAGE connection.

 
# Point the connection string explicitly at the Azure Stack storage endpoints
$hash['storagesjaohrurf7flw_STORAGE'] = 'BlobEndpoint=https://storagesjaohrurf7flw.blob.local.azurestack.external;TableEndpoint=https://storagesjaohrurf7flw.table.local.azurestack.external;QueueEndpoint=https://storagesjaohrurf7flw.queue.local.azurestack.external;AccountName=storagesjaohrurf7flw;AccountKey=MZt4gAph+ro/35qE+AbFEiE4NK6s5XVU/Y4JAi3p3l7yy1d3qx0QPETNl+bGW+fNNvtJHxSXI7TETBWKJw2oQA=='

# Write the updated settings back to the Function App
Set-AzureRMWebAppSlot -ResourceGroupName $myResourceGroup -name $mySite -AppSettings $hash -Slot Production

Now with the endpoints configured, the Function is able to connect to the Blob storage endpoint successfully and there is no more connection error.
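To confirm the change took, you can re-read the settings with the same cmdlets as before – a quick sanity check using the variables defined earlier:

```powershell
# Re-read the app settings and confirm the new connection string is in place
$webApp = Get-AzureRMWebAppSlot -ResourceGroupName $myResourceGroup -name $mySite -slot Production
$webApp.SiteConfig.AppSettings | Where-Object { $_.Name -eq 'storagesjaohrurf7flw_STORAGE' } | Format-List
```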

Had I explicitly defined the connection string in-code pre-deployment, this would not have been an issue. If it is an issue for anyone, here at least is a way to resolve it until the App Settings blade is functional.
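For reference, one way to have the string deployed correctly in the first place is to define it in the ARM template’s app settings within the Microsoft.Web/sites resource. A rough sketch, using the same setting name and endpoints we fixed above (account key omitted here):

```json
"properties": {
  "siteConfig": {
    "appSettings": [
      {
        "name": "storagesjaohrurf7flw_STORAGE",
        "value": "BlobEndpoint=https://storagesjaohrurf7flw.blob.local.azurestack.external;TableEndpoint=https://storagesjaohrurf7flw.table.local.azurestack.external;QueueEndpoint=https://storagesjaohrurf7flw.queue.local.azurestack.external;AccountName=storagesjaohrurf7flw;AccountKey=<storage-account-key>"
      }
    ]
  }
}
```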

Below are a few quick tips to be aware of with the advent of the TP3 Refresh.

Once you have finished deployment, there is a new Portal Activation step.

This has caught a few people out so far; as ever, the best tip is to make sure you read all of the documentation before deployment!

clip_image001

When Deploying a Default Image, make sure you use the -Net35 $True option to ensure that all is set up correctly in advance for when you come to deploy your MSSQL Resource Provider.

.Net 3.5 is a pre-requisite for the MSSQL RP just now, and if you don’t have an image with it installed, your deployment of that RP will fail. It’s included in the example code in the documentation, so just copy and paste that and you’ll be all good.

 
 
$ISOPath = "Fully_Qualified_Path_to_ISO" 
# Store the AAD service administrator account credentials in a variable
$UserName='Username of the service administrator account' 
$Password='Admin password provided when deploying Azure Stack'|ConvertTo-SecureString -Force -AsPlainText 
$Credential=New-Object PSCredential($UserName,$Password) 
# Add a Windows Server 2016 Evaluation VM Image. Make sure to configure the $AadTenant and AzureStackAdmin environment values as described in Step 6 
New-Server2016VMImage -ISOPath $ISOPath -TenantId $AadTenant -EnvironmentName "AzureStackAdmin" -Net35 $True -AzureStackCredentials $Credential 
 

Deployment of the MSSQL Resource Provider: Parameter Name Documentation is Incorrect

The Parameters table lists DirectoryTenantID as the name of your AAD tenant; in actual fact it requires the AAD Tenant GUID. This has been fixed via Git and should be updated before too long.

clip_image002

Use the Get-AADTenantGUID command in the AzureStack-Tools\Connect\AzureStack.Connect.psm1 module to retrieve this.
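With the module imported, retrieving the GUID looks something like the below – note the parameter name here is from my reading of the module at the time, so double-check it against your copy of AzureStack.Connect.psm1:

```powershell
Import-Module .\Connect\AzureStack.Connect.psm1

# Resolve the AAD tenant name to its GUID, for use as DirectoryTenantID
$AadTenantGUID = Get-AADTenantGUID -AADTenantName "mytenant.onmicrosoft.com"
```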

clip_image003

Deploy everything in UTC, at least to be safe.

While almost everything seems to work when the Azure Stack host and VMs are operating in a timezone other than UTC, I have been unable to get the Web Worker role in the App Service resource provider to deploy successfully in any timezone other than UTC.

UTC+1 Log

clip_image004

UTC Log

clip_image005
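If your host or VMs have drifted from UTC, setting them back before deployment is a one-liner – Set-TimeZone is built into Windows Server 2016:

```powershell
# Check the current timezone, then set the host to UTC before deploying
Get-TimeZone
Set-TimeZone -Id "UTC"
```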

Well that’s it for now, I have some more specific lessons learned around Azure Functions which will be written up in a separate entry shortly.

During my TP3 Refresh deployment, I ran into an issue with the POC installer, wherein it seemingly wouldn’t download the bits for me to install and I ended up having to download each .bin file manually to proceed.

Charles Joy (@OrchestratorGuy) was kind enough to let me know via Twitter how to check the progress of download and for any errors. As ever, PowerShell is king.

clip_image001
To test this, I initiated a new download of the POC.

clip_image002
I chose a place to download to on my local machine, then started the download.

clip_image003
After starting the download, I fired up PowerShell and ran the Get-BitsTransfer | fl command to see what was going on with the transfer. In this instance, all is working perfectly, however something stuck out for me…

clip_image004

One thing to notice here is that Priority is set to Normal – this setting uses idle network bandwidth for transfer. Well I don’t want to use idle network bandwidth, I want to use all the network bandwidth! 🙂

We can maybe up the speed here by setting Priority to High or to Foreground. Set to Foreground, it will potentially ruin the rest of your internet experience while downloading, but it will move the process from being a background task using idle network bandwidth into actively competing with your other applications for bandwidth. In the race to deploy Azure Stack, this might be a decisive advantage! 🙂

Get-BitsTransfer | Set-BitsTransfer -Priority Foreground

Kicking off this PowerShell immediately after starting the PoC downloader could in theory improve your download speed. As ever, YMMV and this is a tip, not a recommendation.
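If you want to keep an eye on progress without re-running the command by hand, a simple polling loop does the job – a rough sketch, adjust the interval to taste:

```powershell
# Poll the BITS jobs every 30 seconds and report percentage complete.
# Note: BytesTotal can report as unknown very early in a transfer.
while (Get-BitsTransfer | Where-Object { $_.JobState -ne 'Transferred' }) {
    Get-BitsTransfer | ForEach-Object {
        $pct = [math]::Round(($_.BytesTransferred / $_.BytesTotal) * 100, 1)
        Write-Host "$($_.DisplayName): $pct% ($($_.JobState))"
    }
    Start-Sleep -Seconds 30
}
```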

Sometimes when you embark on a new piece of research, serendipity strikes which just makes the job so much simpler than you’d imagined it to be.

In this case, there are already a series of GitHub examples for integrating Azure Media Services and Azure Blob Storage via Azure Functions. It’s heartening to know that my use case is a common enough one to have example code already up for pilfering.

Azure Media Services/Functions Integration Examples

If we recall the application ‘design’ referenced in previous blogs, the ‘WatchFolder’ console application performs a very specific function – watching a blob storage container, and when it sees a new file of a specific naming convention appear (guid.mp4), it kicks off the Transcription application. The Transcription application moves the file into Azure Media Services, performs subtitle transcription, copies out the subtitles file, runs an FFMPEG job locally to combine the video and the subtitles, and then finally tweets out the resultant subtitled video.

Through exploration of the GitHub examples linked above, specifically the ‘100-basic-encoding’ example, I can actually completely get rid of the WatchFolder application, and move everything in Transcription up to the FFMPEG job into a function.

This is by virtue of the fact that there are pre-defined templates from which functions can be built, and one of those is a C# Function which will run whenever a blob is added to a specified container. Hurrah! Literally just by choosing this Functions template, I have removed the need for a whole C# console app which ran within a VM – this is already valuable stuff.
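For flavour, the skeleton of such a blob-triggered function is tiny – roughly this shape in a run.csx, with names here being my own illustrations rather than anything from the example repo:

```csharp
// run.csx – fires automatically when a new blob lands in the bound container
using System.IO;

public static void Run(Stream inputVideo, string name, TraceWriter log)
{
    // 'name' is populated from the blob path binding, e.g. input/{name}.mp4
    log.Info($"New video detected: {name} ({inputVideo.Length} bytes)");
    // ...submit to Azure Media Services from here...
}
```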

clip_image001

Ok! So to get cracking with building out on top of the example function that looks to fit my use case, as ever we just hit the ‘Deploy to Azure’ button in the Readme.md, and start to follow the instructions.

clip_image002

Actually, before we continue, the best thing to do is to fork this project into my own GitHub repo to protect against code-breaking changes to the example repo. Just use the Fork button at the top of the GitHub page, and choose where you want to fork it to. You’ll need to be signed into a GitHub account.

clip_image003

Now successfully forked, we can get on with deployment.

clip_image004

Enter some basic information – resource group, location, project you want to deploy etc. In this case, we’re taking the 100-basic-encoding function. Easy peasy!

clip_image005

Aaaaaand Internal Server Error. Well, if everything went smoothly, we’d never learn anything, so time to get the ‘ole troubleshooting hat on.

clip_image006

The problem here is a common one if you use a lot of accounts for testing in Azure. When we look at the GitHub sourcecontrols provider at https://resources.azure.com, we can see that this particular test account has never deployed from GitHub before, and so the auth token is not appropriately set.

clip_image007

This is easily fixed in the Azure Portal. Open up your Functions App, select Function App Settings and then Configure Continuous Integration:

clip_image008

And then run through Setup to create a link to GitHub. This will kick off an OAuth process through to your GitHub account, so just follow the prompts.

clip_image009

After completing this and refreshing https://resources.azure.com/providers/Microsoft.Web/sourcecontrols/GitHub, the token now shows as set.

clip_image010

Excellent! Let’s redeploy 🙂

Hurrah! Success!

clip_image011

For no other reason than to show the consistency of approach between a traditional C# console application and a C# Azure Function, below I have pasted the bulk of the TranscribeVideo console app (down to just above the FFMPEG kick-off) directly alongside the out-of-the-box Function example code, with zero changes made yet. It’s also rather gratifying to see that my approach over a year ago and that taken in this Function have significant parallels 🙂

clip_image001[7]

Of course the example code is designed to re-encode an MP4 and output it into an output blob, whereas what we want is to run an Indexing job and then output the resultant VTT subtitles file into an output blob. This only takes a handful of tiny changes, made all the easier by referencing my existing code.

With all the required tweaks to the example code – and they are just tweaks, no major changes – I have decommissioned a full console application, and migrated almost 80% of a second console application into a Functions app. This has exceeded my expectations so far.

Just for the avoidance of doubt, it all works beautifully. Below is a screenshot of the output log of the Function – it started automatically when I added a video file to the input container.

image

Above you can see the Function finding the video file Index.mp4, submitting it to Azure Media Services, running the transcription job, then taking the .vtt subtitles file and dropping it into the output container.

Here it is in Azure Storage Explorer:

image

So with that complete, I now need to look at how I encode the subtitles into the video and then tweet it. When I first wrote this many moons ago, it was significantly easier (or maybe actually only possible) to do this in an IaaS VM using FFMPEG to encode the subtitles into the video file. It looks like this might be a simple built-in function in Azure Media Services now. If that’s the case and it’s cost-effective enough, then I may be able to completely decommission the need for any IaaS, and migrate the entire application-set through into Functions.

I also want to change the function above to take advantage of the beta version of the Azure Media Indexer 2, as it suggests it should be able to do the transcription process significantly faster. If you look at the log file above, you’ll see that it took around 3 minutes to transcribe a 20 second video. If this can be sped up, so much the better.

So a few next steps to do, stay tuned for part 4 I guess! 🙂

image

 

So having made the decision to rewrite a console app in Azure Functions in my previous blog, I should probably explain what Azure Functions actually is, and the rationale and benefit behind a rewrite/port. As ever, there’s no point just doing something because it’s the new shiny – it has to bring genuine cost, time, process, or operational benefit.

Azure Functions is Microsoft’s ‘Serverless’ programming environment in Azure, much like AWS Lambda. I apostrophise ‘Serverless’ because of course it isn’t – there are still servers behind the scenes, you just don’t have to care about their size or scalability. It’s another PaaS (or, depending on your perspective, an actual PaaS), this time one you deliver your code directly into without worrying about what’s beneath.

 

image

 

You only pay for your code when it’s being executed, unlike when running in an IaaS VM where you’re being charged any time the VM is running. For code which only runs occasionally or intermittently at indeterminate times, this can result in pretty big savings.

Functions will automatically scale the behind-the-scenes infrastructure on which your code runs if your call rate increases, meaning you never have to worry about scale in/up/out/down of infrastructure – it just happens for you.

Functions supports a range of languages – PowerShell, Node, PHP, C#, F#, Python, Bash, and so on. You can write your code in the Functions browser and execute directly from there, or you can pre-compile using your preferred environment and upload into Functions. The choice, as they say, is yours.

 

image

 

Well no, don’t. When you’re looking at Functions for Serverless coding, it’s just as vital that you understand the appropriate use cases and where you can gain real operational and financial benefit as it is when you’re evaluating Azure and Azure Stack for running certain IaaS workloads.

There are a number of appropriate use cases documented at the Functions page in Azure; for our purposes there are two of immediate interest: Timer-Based Processing, and Azure Service Event Processing.

Timer-Based Processing will allow us to have a CRON-like job which ensures we keep both our blob storage containers and our Azure Media Services accounts fairly clean, so we’re not charged for storage for stale data.
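A timer trigger is driven by a six-field CRON expression in the function’s function.json – for example, a nightly clean-up run at 03:00 might be bound like this (the binding name is my own):

```json
{
  "bindings": [
    {
      "name": "cleanupTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 3 * * *"
    }
  ]
}
```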

Azure Service Event Processing is the gem that will hopefully let us convert the WatchFolder app discussed in the previous blog post from a C# console app into running in Azure Functions. The goal of this function will be to do exactly what the C# application did, except instead of watching a blob storage container constantly and needing a whole VM to run, it will automatically trigger the appropriate code when a new file is added into a blob storage container by the UWP app.

 

image

 

Which leads us neatly on to design consideration #1. In the previous generation, the two console apps existed in the same VM, and could just call each other directly to execute commands. Now that the WatchFolder app is moving to Azure Functions, I need to re-think how it invokes the Transcription application.

A fairly recent addition to Functions is the ability to just upload an existing Console application into Functions and have it execute on a timer. This isn’t suitable for the whole WatchFolder app, however the sections which are responsible for timed clean-up of blob and AMS storage can be pretty easily split out and uploaded in this way.

For the part of the app which monitors for file addition to blob storage and invokes FFMPEG via the Transcription app, the way I see it with my admittedly mediocre knowledge, there are three vaguely sensible options:

    • Use the Azure Service Bus to queue appropriate data for the Transcription app to monitor for, pick up, and then act on.
    • Create an API app within Azure Stack which can be called by the Functions app and which invokes the Transcription app to run FFMPEG.
    • Write some custom code in the Transcription app to watch AMS for new subtitles files on a schedule, and kick off from there.

Honestly, I want to avoid writing as much custom code as possible and just use whatever native functionality I can, but Service Bus won’t be available in Azure Stack at GA, an API app is probably overkill here, and I can do the required job in a handful of lines of code within the Transcription app, so that’s the way I’ll probably go here. At least in the short term while I continue to figure out the art of the possible.

I should probably also note that Azure Media Services can do native encoding itself, so in theory there’s no need for me to do all this faffing around with IaaS and FFMPEG. For my purposes here though, it is significantly more cost-effective to have an IaaS VM running 24/7 on-premises handling the encoding aspects, and to use AMS for the transcription portion at which it excels. FFMPEG also gives me a lot more control over what I’m doing, and I’ve done a lot of tweaking to get a consistently valid output for the Twitter API to accept without losing video quality.

Right, time to start porting elements across into Functions, ensure the overall app still works end to end, and see what we’ve learned from there!

clip_image001

 

I’ve just spent the last week in Bellevue at the Azure Certified for Hybrid Cloud Airlift, talking non-stop to a huge number of people about Cloud delivery practices, and beyond the incredible technology and massive opportunity that Azure Stack represents, my biggest takeaway from the week is that a lot of people still just don’t get it.

When Azure Stack launches, it will be the first truly hybrid Cloud platform to exist, delivering the same APIs and development experience on-premises and in a service provider environment as is available within the hyper-scale Cloud. It’s a unique and awesome product that loses all sense of differentiation as soon as people say ‘Great! So I can lift and shift my existing applications into VMs in Azure Stack! Then I’ll be doing Cloud!’

Well yes, you can, but you won’t be ‘doing Cloud’. If you have an existing enterprise application it was probably developed with traditional virtualisation in mind, and will probably still run most efficiently and most cost effectively in a virtualisation environment. Virtualisation isn’t going away any time soon, which is why we continue to put so much time and effort into the roadmaps of our existing platforms – most of the time these days it’s still the best place to put most existing enterprise workloads. Even if you drop it into Azure or Azure Stack, the application probably has no way of taking advantage of cloud-native features, so stick with the tried and proven world of virtualisation here.

If however you are developing or deploying net new applications, or are already taking advantage of cloud-native features, or can modernise your DB back end, or can take advantage of turn on/turn off, scale in/scale out type features, and want to bring those to a whole new region or market, then Azure and Azure Stack can open up a plethora of opportunity that hasn’t existed before.

So that’s all well and good to say, but what does modernising an existing application look like in practice? If we want to take advantage of buzzwords like infrastructure as code, serverless programming, containerisation and the like, where do we even begin?

Well it just so happens that I have an application I abandoned a while ago, predominantly due to annoyance at managing updates and dependencies, and at scaling the application out and in automatically as workloads wax and wane. If I write something and chuck it up on an app store, I really want it to maintain and manage itself as much as possible without taking over my life.

SubTwitr is an app I wrote about a year ago to address a pain point I had with Twitter, where I found I would never watch any videos in my feed as I just couldn’t be bothered turning up the volume to listen. I had the idea that I could leverage Azure Media Services to automatically transcribe and subtitle any video content I posted to Twitter, to ensure that at least people viewing my content wouldn’t have that pain. I considered commercialising it, but eventually archived it into GitHub and moved on as I didn’t really have the time to spend on the inevitable support at the time.

Let’s be clear as well, I’m not a pro dev by trade, I dabble in code in order to solve problems for myself, and have done for around 30 years now. I don’t necessarily follow good design patterns, but I do try to at least create code I can maintain over time, with good source control and comment structure.

This is the first app I’ve attempted to modernise using certain Cloud-native features, so is very much a learning experience for me – if I’m doing something stupid, please don’t hesitate to tell me!

Anyway! SubTwitr is comprised of two back end C# console applications which run in a Windows Server 2016 IaaS VM at brightsolid while leveraging Azure Blob storage and Media Services remotely, with a Windows 10 UWP front end application which will run on any Windows 10 device.

SubTwitr UWP App

clip_image002

There is currently no responsive layout built into the XAML, so it’d get rejected from the Windows Store anyway as it stands 🙂 We’re not here to build a pretty app though, we’re here to modernise back-end functionality!

The app is basic: it lets you choose a video, enter a Twitter message, and then post it to Twitter. At this point it authenticates you to the SubTwitr back end via OAuth, and uploads the video into an Azure Blob store along with some metadata – everything is GUIDised.

SubTwitr Console Apps

clip_image003

SubTwitr’s back end consists of two console apps – WatchFolder, and TranscribeVideo.

WatchFolder just sits and watches for a new video to be uploaded into an Azure Blob Store from the UWP app. When it sees a new video appear, it performs some slight renaming operations to prevent other SubTwitr processes trying to grab it when running at larger scale, and then kicks off the second console app.

TranscribeVideo does a little bit more than this…

  • It takes the video passed to it from WatchFolder, and sends it off to Azure Media Services for transcription.
  • AMS transcribes all of the audio in the video into text in a standard subtitle format, and then stores it in its media processing queue for collection.
  • TranscribeVideo watches for the subtitles appearing, and then downloads them and clears out the AMS queue so we don’t end up with a load of videos taking up space there.
  • TranscribeVideo kicks off an FFMPEG job to add the subtitles to the video, in a format and at a size that Twitter will accept.
    • There are a few limitations in the Twitter API around video size and length which need to be taken into account.
  • Twitter OAuth credentials are fetched from Azure Key Vault, and the Tweet is sent.
  • Once the Tweet has been successfully posted, Azure Mobile Services sends a push notification back to the UWP app to say that it’s done.
  • Video is cleaned up from the processing server and TranscribeVideo ends.
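The steps above can be sketched as a single pipeline – a Python illustration with hypothetical stand-ins for AMS, FFMPEG, the key store, Twitter, and push notifications, just to show the control flow end to end:

```python
# Hypothetical stand-ins for the real services; each one records its call so
# the order of operations in the bullet list above is visible.
log = []

class FakeService:
    """Records each call and returns a canned result."""
    def __init__(self, name, result=None):
        self.name, self.result = name, result
    def __call__(self, *args):
        log.append(self.name)
        return self.result

def transcribe_and_tweet(video, submit, wait_for_subtitles, cleanup,
                         burn_in_subtitles, get_credentials, post_tweet, notify):
    job = submit(video)                          # send video off for transcription
    subtitles = wait_for_subtitles(job)          # poll, then download the subtitles
    cleanup(job)                                 # clear the AMS queue
    clip = burn_in_subtitles(video, subtitles)   # FFMPEG: add subs, fit Twitter limits
    creds = get_credentials("twitter-oauth")     # fetch OAuth creds from the key store
    post_tweet(clip, creds)                      # send the Tweet
    notify("done")                               # push notification back to the UWP app

transcribe_and_tweet(
    "video.mp4",
    FakeService("submit", "job-1"),
    FakeService("wait", "subs.srt"),
    FakeService("cleanup"),
    FakeService("burn-in", "subtitled.mp4"),
    FakeService("keyvault", "oauth-creds"),
    FakeService("tweet"),
    FakeService("notify"),
)
```

Each stage only depends on the output of the previous one, which is what makes it practical to split the stages across Functions and containers later.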

Note that WatchFolder can initiate as many instances of TranscribeVideo as it wants. Scalability limitations do come in in a few areas though; I’ve listed some below, along with how I can address them using native Azure functionality.

  • VM Size
    • If a load of FFMPEG jobs are kicked off, the VM can become overloaded and slow to a crawl.
    • VM Scale Sets can be used to automatically deploy a new VM Instance if CPU is becoming contended. The code is designed to allow multiple instances to target the same Blob storage. It doesn’t care if they’re on one VM or multiple VMs.
  • Azure Media Services Indexer
    • AMS only allows one task to run at a time by default; concurrency is governed by Media Reserved Units, and you can pay for more concurrent tasks if desired.
    • A new, faster version of the Indexer has been released since I initially wrote SubTwitr, and is currently in preview. Sounds like a good thing to test!
  • Bandwidth
    • With a lot of videos flying back and forth, ideally we want to limit charges incurred here.
    • The most cost-effective route I have available is video into Azure Blob (free), Blob to AMS (free), AMS to brightsolid over ExpressRoute (‘free’), brightsolid to Twitter (‘free’).
  • Resource and Dependency Contention
    • I haven’t done any at-scale testing of running loads of TranscribeVideo and WatchFolder processes concurrently, however as they share dependencies and resources at the VM level, there exists the chance for them to conflict and impact each other.
    • Moving WatchFolder into Azure Functions, and containerising TranscribeVideo should significantly help with this.
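One simple stopgap for the VM-overload case, sketched in Python (illustrative only – the proper fix is scale sets and containers as above), is to cap the number of concurrent FFMPEG jobs per VM with a semaphore:

```python
import threading

MAX_CONCURRENT_TRANSCODES = 2   # assumed cap -- tune to the VM size
transcode_slots = threading.BoundedSemaphore(MAX_CONCURRENT_TRANSCODES)
active = []    # jobs currently "inside FFMPEG"
peak = [0]     # highest concurrency observed

def run_transcode(job_id):
    # Only MAX_CONCURRENT_TRANSCODES jobs proceed at a time; the rest block
    # here instead of overloading the VM with simultaneous FFMPEG processes.
    with transcode_slots:
        active.append(job_id)
        peak[0] = max(peak[0], len(active))
        # ... launch FFMPEG here ...
        active.remove(job_id)

threads = [threading.Thread(target=run_transcode, args=(i,)) for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A semaphore only protects a single VM, of course – it does nothing for the cross-VM case, which is where VM Scale Sets come in.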

Next Steps

So there we are, I have a task list to work through in order to modernise this application!

  • Rewrite the WatchFolder console app as an Azure Functions app which will run on Azure today, and on Azure Stack prior to GA.
  • Deploy the VM hosting TranscribeVideo as a VM Scale Set, and set the rules for scale-out and scale-in appropriately.
  • Rewrite the Azure Media Services portions of TranscribeVideo to use the new AMS Indexer 2 Preview.
  • Containerise the TranscribeVideo application.
  • Wrap the whole thing in an ARM template for simplified future deployment.

 

Right, time to get on with deploying my first Functions app – let’s see what the process is like, and what lessons we can learn.

 

There are often times in the technical previews of Azure Stack where you will need to collect logs to send back to the product teams. Fortunately, in TP3 this previously tedious process has been consolidated into a single command, as per Charles Joy in the Azure Stack forums:

  • Command: Get-AzureStackLogs

    Instructions:

    1. From the Azure Stack POC HOST…
    2. Run the following to import the required PowerShell module:

      cd C:\CloudDeployment\AzureStackDiagnostics\Microsoft.AzureStack.Diagnostics.DataCollection
      Import-Module .\Microsoft.AzureStack.Diagnostics.DataCollection.psd1

    3. Run the Get-AzureStackLogs command, with optional parameters (examples below):

      # Example 1 : collect all logs for all roles
      Get-AzureStackLogs -OutputPath C:\AzureStackLogs

      # Example 2 : collect logs from BareMetal Role (this is the Role where DEPLOYMENT LOGS are collected)
      Get-AzureStackLogs -OutputPath C:\AzureStackLogs -FilterByRole BareMetal

      # Example 3 : collect logs from VirtualMachines and BareMetal Roles, with date filtering for log files for the past 8 hours
      Get-AzureStackLogs -OutputPath C:\AzureStackLogs -FilterByRole VirtualMachines,BareMetal -FromDate (Get-Date).AddHours(-8) -ToDate (Get-Date)

      # If FromDate and ToDate parameters are not specified, logs will be collected for the past 4 hours by default.

    Other Notes about the Command:

    • The command can take some time to complete, depending on which roles’ logs are being collected, the time window specified, and the number of nodes in the MASD environment.
    • After log collection completes, check the new folder created under the OutputPath specified in the command input (C:\AzureStackLogs in the examples above).
    • A file named Get-AzureStackLogs_Output will be created alongside the zip files; it contains the command output, which can be used to troubleshoot any failures in log collection.
    • Each role’s logs are packaged in their own zip file.

One of the wonderful new additions to Azure Stack in Technical Preview 3 is Marketplace Syndication.

The Azure Marketplace offers VM Images with pre-installed software/config, VM Extensions, SaaS Applications, Machine Learning services, and Data Services.

With Marketplace Syndication in TP3, we are now able to directly pull a subset of VM Images from Azure into Azure Stack for consumption by tenants. For anyone who built and deployed Gallery items in Azure Pack, this is just glorious.

The Public Azure Marketplace offers five pricing models:

 

  • BYOL model: Bring your own licence. You obtain the right to access or use the offering outside of the Azure Marketplace, and are not charged Azure Marketplace fees for using the offering in the Azure Marketplace.
  • Free: Free SKU. Customers are not charged Azure Marketplace fees for use of the offering.
  • Free Software Trial (try it now): Full-featured version of the offer that is promotionally free for a limited period of time. You will not be charged Azure Marketplace fees for use of the offering during a trial period. Upon expiration of the trial period, customers will automatically be charged based on the standard rates for use of the offering.
  • Usage-based: You are charged or billed based on the extent of your use of the offering. For Virtual Machines Images, you are charged an hourly Azure Marketplace fee. For Data Services, Developer services and APIs, you are charged per unit of measurement as defined by the offering.
  • Monthly Fee: You are charged or billed a fixed monthly fee for a subscription to the offering (from date of subscription start for that particular plan). The monthly fee is not prorated for mid-month cancellations or unused services.
    Offer-specific pricing details can be found on the solution details page on /en-gb/marketplace/ or within the Microsoft Azure classic portal.
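To make the usage-based vs monthly-fee distinction concrete, here’s a worked example with entirely made-up rates (these are NOT real Marketplace prices – check the offer’s details page for those):

```python
# Illustrative only: both rates below are invented for the sake of arithmetic.
hourly_marketplace_fee = 0.02        # assumed per-hour fee for a VM image offering
hours_in_month = 30 * 24             # a VM left running for a 30-day month

usage_based_cost = hourly_marketplace_fee * hours_in_month   # billed per hour used
monthly_fee = 12.00                  # assumed flat subscription fee, not prorated

# A VM left running all month costs more under usage-based billing here;
# one only used a few hours a day would cost far less.
print(f"usage-based: {usage_based_cost:.2f}, monthly: {monthly_fee:.2f}")
```

The crossover point depends entirely on utilisation, which is why both models exist side by side.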

As of right now in TP3, BYOL is the only model available, and only for a small subset of offerings. That doesn’t matter though – we’re just proving the concept – so enabling Marketplace Management was the very first thing I did once I’d fired up my TP3 portal.

 

Registering the Resource Provider

When you click through to the Marketplace Management resource provider, it presents you with a link to follow in order to register and activate the resource provider. It needs to be registered against an existing Public Azure subscription in order to pull marketplace items down from hyperscale to on-prem.

[Screenshot: Marketplace Management blade – “You need to register and activate before you can start syndicating Azure Marketplace content. Follow instructions here to register and activate.”]

The documentation to do this is available at the following link:

https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-register

A PowerShell script is required in order to register the resource provider, available from GitHub:

https://github.com/Azure/AzureStack-Tools/blob/master/Registration/RegisterWithAzure.ps1

And of course you need the AzureRM PowerShell module installed, via Install-Module AzureRM

When registering the RP you are prompted for an Azure subscription, and an Azure username and password. This can be a completely separate subscription and username to the one used for Azure Stack deployment. It cannot, however, be a CSP subscription.

Run the script to completion…

[Screenshot: RegisterWithAzure.ps1 running in PowerShell ISE – the NewRegistrationRequest action plan creating the registration request]

[Screenshot: script output ending “STEP 4: Activate Azure Stack completed. Registration complete. You may now access Marketplace Management in the Admin UI”]

… and all should be well! You can now refresh the Marketplace Management resource provider, to be presented with a new message and an ‘Add from Azure’ button. Yay!

[Screenshot: Marketplace Management blade – “You have no items downloaded to your Azure Stack marketplace yet. Click ‘Add from Azure’ to add items.”]

The available list is currently quite small, but pretty much everything is useful, so kudos on the choices there Microsoft!

Simply select what you want to bring into your Azure Stack, and click download. One thing I noticed is that the transfers were pretty slow, even on our ridiculously fast connections. Pulling down a handful of gallery images had to be left running overnight.

[Screenshot: the ‘Add from Azure’ blade listing the available items – Remote Desktop Services (RDS) Basic Farm, SQL Server 2014 SP1 Express, and SQL Server 2016 RTM Developer from Microsoft, plus GitLab, LAMP, Magento, Moodle, Nginx, ownCloud, Redmine, Ruby, WordPress, and Drupal from Bitnami]

Downloading… waiting… downloading…

[Screenshot: the Ruby, WordPress, and Drupal items sitting in ‘Downloading’ status]

Five wonderful marketplace items added and ready for tenant consumption! Amazing.
[Screenshot: Marketplace Management showing five items – LAMP, Ruby, WordPress, RDS Basic Farm, and SQL Server 2014 SP1 Express – all with ‘Succeeded’ status]

Tenants can now select these Marketplace items, and deploy them immediately. This is such a leap forward from Azure Pack, and I feel such joy in using this feature. How important this is cannot be overstated.

[Screenshot: the tenant ‘New’ blade showing the syndicated Marketplace categories, with WordPress as a featured app]

… and here we are! A WordPress VM deployed using an image from Public Azure, all controlled and managed from within the Azure Stack web UI – no PowerShell, no building of VM images, all just so simple. Phenomenal.
[Screenshot: the WordPress VM’s overview blade – running in resource group wp-dev-rg, location Dundee, Linux, Standard A1 (1 core, 1.75 GB memory)]

This is a bit of a non-blog, as the TP3 deployment experience is utterly joyous. It deployed first try for me, taking around four hours from start to completion, with no errors logged. I happened to screenshot the process, so here it is in all its glory 🙂

In order to deploy the PoC, I followed the documentation at https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-deploy

Download and run the PoC Downloader. It’s highly recommended that you tick the ‘Download the Windows Server 2016 EVAL (ISO)’ box so you can get something added to the marketplace once it’s deployed.

[Screenshot: Azure Stack POC Downloader – choose the Technical Preview release build (version 20170225.2), optionally tick ‘Download the Windows Server 2016 EVAL (ISO)’, and pick a download location (15.07 GB required)]

The PoC Downloader will download the PoC.

[Screenshot: the PoC download in progress]

You will need at least 85GB of free storage to extract the downloaded files, which you can do by clicking the ‘Run’ button once the download has completed.

[Screenshot: download complete – close the window to exit, or click Run to launch the Azure Stack POC self-extractor]

The extractor extracts a VHDX file from which we will boot and run a whole Azure Stack environment. One file to worry about – so simple, even I can’t mess it up.

[Screenshots: the setup wizard extracting CloudBuilder.vhdx, then completing]

Once extracted, copy the CloudBuilder.vhdx file to the root of your host’s C: drive.

The documented PowerShell below will download preparatory files you need, so blindly copy it into PowerShell ISE and run it 🙂

 

# Variables
$Uri = 'https://raw.githubusercontent.com/Azure/AzureStack-Tools/master/Deployment/'
$LocalPath = 'c:\AzureStack_SupportFiles'

# Create folder
New-Item $LocalPath -type directory

# Download files
( 'BootMenuNoKVM.ps1', 'PrepareBootFromVHD.ps1', 'Unattend.xml', 'unattend_NoKVM.xml') | foreach { Invoke-WebRequest ($uri + $_) -OutFile ($LocalPath + '\' + $_) }

The next step is the same as TP2, run the PrepareBootFromVHD PowerShell script to set the BCDBoot entry to allow the host to reboot into the CloudBuilder VHDX. Apply an Unattend file if you don’t have console access to the host. Or don’t, I’m not your boss.

[Screenshot: PrepareBootFromVHD.ps1 creating a new boot entry for CloudBuilder.vhdx, setting the local administrator password, and prompting to restart the host]

Once you’ve rebooted into the CloudBuilder VHDX and logged in using the password you provided when applying the Unattend file, run through the same steps as you would have in TP2.

If not using DHCP, set a static IP on the host.

If you’re anywhere other than UTC-8, set a time server.

Rename the host.

Reboot.

Disable all NICs other than the NIC that provides internet connectivity.

Actually I haven’t validated the last step – it was necessary in TP1 and TP2, but I’m pretty certain I saw the deployment script checking for the correct NIC to use while it was installing. Let’s check…

# DeploySingleNode.ps1, lines 326-331 (reconstructed from the screenshot)
$networkConfiguration = @(Get-NetIPConfiguration | Where-Object { $_.NetAdapter.Status -eq 'Up' })
if ($networkConfiguration.Count -gt 1)
{
    throw $LocalizedData.MoreThanOneNicEnabled
}

Yep, DeploySingleNode.ps1, lines 326 to 331 – only one NIC is allowed to be enabled still, so let’s disable all the other NICs.

[Screenshot: the Network Connections control panel, with every NIC disabled except the one providing internet connectivity]

Ok! In this environment I’ve not got DHCP available, so we need to set a static IP – for this lab I’m using 10.20.39.124. Here are the steps to kick off deployment from an elevated PowerShell window. NOTE: Do not use PowerShell ISE for this – if you do, it may lead to fuckery.

cd C:\CloudDeployment\Setup

$adminpass = ConvertTo-SecureString "Local Admin Password" -AsPlainText -Force

$aadpass = ConvertTo-SecureString "Azure AD Account Password" -AsPlainText -Force

$aadcred = New-Object System.Management.Automation.PSCredential ("AAD account email address", $aadpass)

.\InstallAzureStackPOC.ps1 -AdminPassword $adminpass -InfraAzureDirectoryTenantAdminCredential $aadcred -NatIPv4Subnet 10.20.39.0/24 -NatIPv4Address 10.20.39.124 -NatIPv4DefaultGateway 10.20.39.1 -TimeServer sometimeserver

This is a slight change from TP2, with -AADCredential being renamed to -InfraAzureDirectoryTenantAdminCredential, which just rolls off the tongue :/

Deployment kicks off, and you pretty much wait for four hours. This is also a slight change from TP1 and TP2, with the ‘Cross Fingers and Pray to the Old Gods and the New’ step now being notably absent as everything just works.

[Screenshots: InstallAzureStackPOC.ps1 deployment output – the action plan working through physical machine and external networking configuration, storage cluster creation, and usage bridge configuration, amid the usual flood of ‘unapproved verbs’ module-import warnings]
- 3/1/2017 10:24:55 PM Exception calling "Getcredential " with "1" argument (s) : sequence contains no elements' Attempting to retrieve LocalAdmin credential ... - 3/1/2017 10:24:55 PM Attempting to retrieve MgmtLoca1Admin credential ... - 3/1/2017 10:24:55 PM Attempting to retrieve CAcertifi cateuser credential . - 3/1/2017 10:24:55 PM - 3)i/2017 10:24:55 PM Attempting to retrieve Domai nAdmin credential ... Attempting to retrieve Fabric credential ... - 3/1/2017 10:24:55 PM Attempting to retrieve sq 1 service credential ... - 3/1/2017 10:24:55 PM Migrating cloudDefinition to ECEservi ce... - 3/1/2017 10:24:55 PM - 3/1/2017 PM - 3/1/2017 10:24:55 PM Initializing remote powershell session on MAS-ERCS01.AzureStack.Loca1 with common functions. - 3/1/2017 10:24:57 PM Loading infra vm helpers . PSI) to session on MAS-ERCS01.AzureStack.Loca1 - 3/1/2017 10:24:57 PM Migration of cl oudDefinition to ECEservnce completed successfully. - 3/1/2017 10:25:10 PM Migrating ECELite to AD VMS. . . - 3/1/2017 10:25:10 PM copying cloudDep10yment Files to AD VM... - 3/1/2017 10:25:10 PM comed cloudDep10yment Files to AD VM. - 3/1/2017 10:25:31 PM Hydrating ECELite with runtime values... - 3/1/2017 10:25:31 PM Migration of ECELite to AD VMS completed succesfully! - 3/1/2017 10:25:33 PM Interface: Interface Mi grate completed. - 3/1/2017 10:25:33 PM Task cl oud\Fabri c\seedRi ngservi ces\ECEseedRi ng Mi grate Task: Task completed. - 3/1/2017 10:25:33 PM PHASE. 3.1 -C FBI) Migrate confi guration to ECE service on seedRing step 241 - step: Status of step '241 - PHASE. 3.1 -(FBI) Migrate configuration to ECE service on seedRing' is 'success ' prepare for future host reboots step 251 - prepare for future host reboots - 3/1/2017 10:25:33 PM Running step 251 - - 3/1/2017 10:25:33 PM Running interface 'startup' of role 'cl . Attempt #1. 
Task cl - startup Interface: path to module: C: psml - 3/1/2017 10:25:33 PM Interface: Running interface startup psml, poc:startup) - 3/1/2017 10:25:33 PM Deleting onstartup scheduled task. - 3/1/2017 10:25:42 PM - 3/1/2017 10:25:33 PM setting restart callback as: Import—Module c: oudDepI oyment\ECEngi ne\Ente rpri secl oudEngi ne . psdI Invoke-EceAction -Rolepath Cloud -ActionType Startup —verbose -ErrorAction conti nue Invoke-EceAction -Rolepath Cloud -Action Type Startup —verbose -ErrorAction conti nue Invoke-EceAction -Rolepath Cloud -ActionType Startup —verbose -ErrorAction conti nue ' 3/1/2017 10:25:43 PM Reaistering the callback for powershell . exe with argument: '-Executionpolicy Remotesigned -NOEXit -command Import-Module .\CIoudDepIoyment .psdl Import-M0duTe c: oudDepI pers . psmI Import-Module c: oudDepI oyment\ECEngi ne\Enterpri secl oudEngi ne . psdI Invoke-EceAction -Rolepath Cloud -Act non Type Startup —verbose -ErrorAction conti nue Invoke-EceAction -Rolepath Cloud -ActionType Startup —verbose -ErrorAction conti nue Invoke-EceAction -Rolepath Cloud -ActionType Startup —verbose -ErrorAction conti nue ' Registering the scheduled task named ' coldstartMachine under the user ' AzurestackAdmin' . ERBOSE : ERBOSE : COMPLETE : ERBOSE : COMPLETE : ERBOSE : ERBOSE : COMPLETE : - 3/1/2017 10:25:43 PM - 3/1/2017 10:25:43 PM Interface: Interface Startup completed . - 3/1/2017 10:25:43 PM Task cl - startup Task: Task completed. - 3/1/2017 10:25:43 PM prepare for future host reboots step 251 - Step: Status of step '251 — prepare for future host reboots' is 'success ' - 3/1/2017 10:25:43 PM Action: Action plan Action ' Depl oyment ' ' Depl oyment' compl eted . - 3/1/2017 10:25:43 PM ps c: oudDep1 PS c: oudDepI ps c: oudDepI PS c: oudDepI 10:27 PM 3/1/2017

Finally, change the default password expiry to 180 days, as per the documentation (the domain default of 42 days would otherwise cause the deployment's service accounts to expire):

 

Set-ADDefaultDomainPasswordPolicy -MaxPasswordAge 180.00:00:00 -Identity azurestack.local
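If you want to sanity-check that the change took, you can read the policy straight back. This verification step isn't in the original walkthrough; it assumes you're still in the same elevated PowerShell session on the host, with the ActiveDirectory module available:

```powershell
# Read back the domain password policy to confirm the new 180-day expiry.
# (Hypothetical verification step; assumes the ActiveDirectory module is
# installed and the session can reach the azurestack.local domain.)
Import-Module ActiveDirectory
Get-ADDefaultDomainPasswordPolicy -Identity azurestack.local |
    Select-Object MaxPasswordAge
```

MaxPasswordAge should come back as 180.00:00:00.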
And that's it! Azure Stack TP3 deployed and ready to rock and roll!

[Screenshot: the Azure Stack administrator portal dashboard for region 'local', taken 10:54 PM on 3/1/2017. It shows Region Management with no critical or warning alerts, the resource providers (Capacity, Compute, Key Vault, Network, SQL, Storage, Updates), current version 1.0.170225.2 with state UpToDate, and Marketplace tiles for Virtual Machines, App Service, SQL Database and Storage.]