
What is around the corner for IT-Pros – Part 2

This is part 2 of a multi-part series on the topics discussed in Part1. Next up is the Hybrid Cloud, how it will impact us, and why it is the way of the future.


Hybrid cloud

Microsoft changed its focus a couple of years ago, from being the primary solution provider for enterprise software running in datacenters to a Mobile First/Cloud First strategy. Their bet was that the cloud would become more important and that economic growth would be created in the emerging cloud market. Needless to say, it looks like their bet is paying off tenfold.



Azure

Over the last year there have been a significant number of announcements that connect Azure to your datacenter, enabling enterprises to utilize cloud resources together with on-prem data sources or local Hybrid Runbook Workers. It should not come as any surprise that this trend is accelerating in preparation for the next big shift in hybrid technology, expected to be released with Windows Server 2016: Microsoft Azure Stack. Before we go down that rabbit hole, here is a short history of Microsoft's hybrid cloud efforts.


Azure Pack – The ugly

In my honest opinion, this was a disaster. It was Microsoft's first attempt at building a hybrid cloud, using workarounds (Service Provider Foundation, anyone?) to enable a multi-tenant-aware solution. It relied heavily on System Center technology and was a beast to set up, configure, and troubleshoot. Although there was/is integration with Azure, it relies on the old API, Azure Service Manager, and the experience is not consistent. Sometimes the early adopters pay the ultimate price, and this was one of those times. Currently no upgrade path from Azure Pack to the new Azure Stack solution has been announced, and I doubt there ever will be one.

That being said, it works and provides added value for the enterprise. On the downside, it does not scale like Azure and requires expert knowledge to manage. My advice: if you are considering a Private or Hybrid Cloud, wait until Windows Server 2016 is released and have a look at Azure Stack instead.



CPS (Cloud Platform System) – The bad

This is Microsoft's first all-in-a-box cloud solution, powered by Dell hardware. The entire system runs and scales very nicely. When you want more capacity, you buy a new all-in-one box and hook it up to the first one. It was built upon the first attempt at creating a Private Cloud in a “box” running Windows Azure Pack. The initial CPS configuration is done by a single massive PowerShell script, and it was planned and released before the new Azure Resource Manager (ARM) technology hit the ground running.

Why is it bad? Because in its current release it is powered by Azure Pack, and it fits nicely into the Clint Eastwood analogy I lined up. I would be very surprised if it is not bundled with Azure Stack when that is released later this year or early next year. Time will tell.

Just in case you were wondering: the price tag for this solution with hardware, software, and Software Assurance would run you somewhere in the region of $2.5 million. That is for the first box. You may get a discount if you buy several boxes at the same time.



MAS (Microsoft Azure Stack) – The good

Fast forward to Microsoft Ignite 2015, where MAS was announced. It is currently in limited preview (the same as the Windows Server 2016 preview program) and is expected to be released to the market when Windows Server 2016 reaches RTM.

MAS is the software-defined datacenter you can install in your own datacenter to create your own private cloud. It is identical to Azure, behaves like Azure in every respect, and it runs in your datacenter, giving you a consistent experience across boundaries. Think about that for a minute and reflect on how this will change your world going forward.

A true Hybrid Cloud will manage and scale your resources using technology built for and enabled by the cloud. The resource templates (JSON ARM templates) you create to build services in MAS can, with the flip of a switch, be deployed to Azure instead, and the other way around.
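As a sketch, a minimal ARM template could look like the following (the resource type, API version, and parameter name are illustrative, not taken from any MAS documentation):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "properties": { "accountType": "Standard_LRS" }
    }
  ]
}
```

The point of the consistency story is that the template itself does not care where it lands; the difference between deploying to Azure and deploying to MAS should come down to which ARM endpoint you authenticate against.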


MAS – Overview

This is an image I borrowed from a presentation held by Jeffrey Snover during the PowerShell Summit in Stockholm this year (full video here). MAS does not rely on any System Center components and is built to be a true multi-tenant solution. There will be providers that support the different System Center products, which is probably a good idea.

The MAS structure is strikingly similar to something we all know very well: it contains the conceptual building blocks of an operating system, or of a server if you like.


MAS - Hardware and Abstraction layer

The hardware layer explains itself. It is the common components a server is built from: CPU, storage, network, and so on. Above this we have the Abstraction layer, which consists of Plug-and-Play and a driver stack. This layer is there to assist you when you “plug in” new hardware in your datacenter, add more storage, and so on. It is also the layer the MAS kernel communicates with.

Big progress has been made on creating a Datacenter Abstraction Layer (DAL, the datacenter counterpart of the Hardware Abstraction Layer (HAL) in Windows) that conforms to standards hardware vendors implement. These are:


  • System Management Architecture for Server Hardware (SMASH)
  • Common Information Model (CIM, or WMI on earlier versions of Windows)
  • Storage Management Initiative Specification (SMI-S)




The main goal of DAL is to create unified management of hardware resources. Microsoft has created an open source implementation of these standards called Open Management Infrastructure (OMI). OMI has been adopted and implemented by Cisco, Arista, HP, Huawei, IBM, and various Linux distros. This is why you can run Linux in Azure, and why MAS can talk to and configure hardware resources like network, storage, and other devices for you.
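The consumer side of this is the CIM cmdlets in PowerShell. A quick sketch (the remote host name, port, and the OMI class queried are illustrative; what is actually exposed depends on the OMI providers installed):

```powershell
# Local CIM/WMI query - the same object model the DAL standards build on
Get-CimInstance -ClassName Win32_Processor | Select-Object Name, NumberOfCores

# The same cmdlets can target an OMI server on a Linux box or a network
# switch over WS-Man; host, port and class name are illustrative here
$session = New-CimSession -ComputerName 'omi-host01' -Port 5986
Get-CimInstance -CimSession $session -Namespace root/omi -ClassName OMI_Identify
```

The appeal is that one set of management cmdlets works against Windows, Linux, and standards-compliant hardware alike.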

For Server and Rack Management there will be something called Redfish, which implements an OData endpoint that supports paging, server-side filtering, and request headers. There will be PowerShell cmdlets you can use to interact with Redfish; however, at this time it is uncertain whether they will be ready by Windows Server 2016 RTM.
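Because Redfish is plain REST/OData over HTTPS, you do not strictly need dedicated cmdlets to poke at it. A hedged sketch (the BMC address is made up; `/redfish/v1` is the entry point defined by the Redfish specification):

```powershell
# Walk the Redfish service root and follow the link to the Systems collection
$cred = Get-Credential
$root = Invoke-RestMethod -Uri 'https://bmc01.contoso.com/redfish/v1' -Credential $cred
Invoke-RestMethod -Uri "https://bmc01.contoso.com$($root.Systems.'@odata.id')" -Credential $cred
```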


MAS - Initial System Load

The initial setup of MAS is entirely done and enforced by Desired State Configuration (DSC), not plain PowerShell scripts like you might expect. This has a number of implied consequences you might want to reflect on:

  1. If DSC is used in MAS, is Azure also using “DSC” under the hood?
  2. If DSC is used in MAS, would it be fair to say that Microsoft has made a deep commitment into DSC?

The answer to no. 1 is: “I do not know, yet”. For no. 2, it is a big fat YES.

The Azure Resource Manager (ARM) in Azure and MAS bears a striking resemblance to Desired State Configuration:


  • They are both idempotent
  • Both use resources and resource providers
  • They both run in parallel 
  • They are both declarative
  • ARM uses JSON and DSC uses MOF text files
  • A DSC configuration or a JSON template file can be re-applied several times, and only missing elements or new configuration are applied to the underlying system.
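To make the parallel concrete, here is a minimal DSC configuration (the feature, file path, and contents are illustrative). Like an ARM template, it declares an end state and can be re-applied safely:

```powershell
Configuration WebServer
{
    # Declare the desired end state; the Local Configuration Manager only
    # acts when the system drifts from it, so re-applying is a no-op.
    Node 'localhost'
    {
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }

        File DefaultPage
        {
            DestinationPath = 'C:\inetpub\wwwroot\index.html'
            Contents        = '<h1>Hello from DSC</h1>'
            Ensure          = 'Present'
            DependsOn       = '[WindowsFeature]IIS'
        }
    }
}

WebServer -OutputPath 'C:\DSC'                        # compiles to a MOF file
Start-DscConfiguration -Path 'C:\DSC' -Wait -Verbose  # enacts the configuration
```

Note how the configuration compiles to MOF before it is enacted, just like an ARM template is handed to the resource providers as a declarative document.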


MAS Kernel

You only want secure and trustworthy software running here. It is the heart and soul of MAS, and it is protected and run by Microsoft's new cloud OS: Nano Server. Nano Server is the new scaled-down Windows Server 2016 that is built for the cloud. Its footprint is more than 20 times smaller than Server Core, and it boots in less than 6 seconds.

There has been a number of security enhancements that directly apply to the MAS kernel:


  • Enhanced security logging – every PowerShell command is logged, no exceptions
  • Protected event logging – you can now encrypt your event log with a public key and forward it to a server holding the matching private key, which can decrypt the log.
  • Assume breach – this implies a mindset change at Microsoft. They now assume that the server will be breached, and the security measures/plan are implemented accordingly.
  • Just Enough Admin (xJEA) – JEA is about locking down your infrastructure with the JEA toolkit, limiting the exposure of privileged access to the core infrastructure/systems. It now also supports a big red panic button for those cases that require emergency access to the core to solve a critical problem that would otherwise have to be approved through the appropriate channels.
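As a sketch of what a JEA-style lockdown looks like with the WMF 5 tooling (the group name, role name, and the single whitelisted cmdlet are illustrative):

```powershell
# A role capability file whitelists exactly the commands an operator may run.
# (To be discoverable, the .psrc normally lives in a module's RoleCapabilities folder.)
New-PSRoleCapabilityFile -Path '.\ServiceOperator.psrc' `
    -VisibleCmdlets 'Restart-Service'

# A session configuration maps an AD group to that role and runs commands
# under a temporary virtual account instead of the operator's own identity
New-PSSessionConfigurationFile -Path '.\JeaDemo.pssc' `
    -SessionType RestrictedRemoteServer `
    -RunAsVirtualAccount `
    -RoleDefinitions @{ 'CONTOSO\ServiceOperators' = @{ RoleCapabilities = 'ServiceOperator' } }

Register-PSSessionConfiguration -Name 'JeaDemo' -Path '.\JeaDemo.pssc'
```

An operator connecting with `Enter-PSSession -ConfigurationName JeaDemo` then sees only the whitelisted commands, nothing else.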


To show developers that Microsoft is serious about PowerShell, they have made some changes to Visual Studio to increase its support for PowerShell and included some nice tools for you:


  • Static PowerShell code analysis with Script Analyzer
  • Unit testing for PowerShell with Pester (see Part1)
  • Support for classes in PowerShell, like in C#
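A small illustration of the new class syntax in PowerShell 5 (the type itself is made up for the example):

```powershell
class Server
{
    [string] $Name
    [int]    $CpuCount

    # Constructors and typed properties, much like C#
    Server ([string] $Name, [int] $CpuCount)
    {
        $this.Name     = $Name
        $this.CpuCount = $CpuCount
    }

    [string] Describe ()
    {
        return ('{0} has {1} CPUs' -f $this.Name, $this.CpuCount)
    }
}

[Server]::new('srv01', 4).Describe()   # -> 'srv01 has 4 CPUs'
```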


MAS - User space

This is where the tenant Portal, the Gallery, and the Resource Providers live. Yes, MAS will have a gallery of services that your tenants can consume. This is where the DevOps lifestyle comes into play, like we talked about in Part1.

In addition, Microsoft has proved it cherishes Linux with the announcement that it will implement OpenSSH on Windows. Furthermore, they have started to port DSC to Linux, spinning off their OMI commitment in the open source community.


Shadow IT

Everybody has a shadow IT problem. People who say they do not just have not realized it yet. It has become so easy to consume cloud resources that solve line-of-business problems that IT can't, or is unable to, solve in a timely manner. There could be any number of reasons for this; commonly it is related to legacy requirements, budget constraints, or pure resistance towards any change not initiated by IT themselves.

One of the goals of implementing a hybrid/private cloud should be to use the technology to re-enable IT as a strategic tool for management, one that creates competitive advantages that drive economic growth. In my opinion, executive management has for too long regarded IT as a cost center and not as an instrument it can use to achieve business goals, strategic advancement, and financial progress.



Missing automation link

A year and a half ago I wrote a 2-part blog (Part1 and Part2) about the missing automation link. Basically it was a rant where I could not understand why DSC was not used more to enable the Hybrid Cloud. Windows Azure Pack just did not feel right, and it turns out I was right. Now we have the answer, and it is Microsoft Azure Stack. It runs Microsoft Azure, and perhaps one day it will run in your datacenter too.



Will the pure datacenter survive?

For the time being, I think they will; however, they will be greatly outnumbered by hybrid clouds running in conjunction with the cloud, not in spite of it. Currently we are closing in on a Kodak moment. It does not matter if your datacenter is perfect in the eyes of whoever is in charge; if it does not solve the LOB problems in your organization, the cloud will win if it provides the right solution at the right time.



Why should you implement a Hybrid Cloud?

The question is more like: why not? I know that is a bit arrogant; however, Microsoft has made a serious commitment to a consistent experience, whether you are spinning up resources in the Cloud or in your private Hybrid Cloud. Why would you not be prepared to utilize the elasticity and scalability of the cloud? With the Hybrid Cloud you get the best of both worlds, in addition to most of the innovation Microsoft does in the Cloud.

As Azure merges closer and closer with on-prem datacenters, it should become obvious that not implementing a hybrid cloud is the wrong way to go. And even if Azure merges nicely with on-prem, that will not compare to the integration between Azure and MAS.

Two more important things will accelerate the shift in IT: Containers/Azure Container Service and the new cloud operating system Nano Server will change the world due to their portability and light weight. For the first time I see opportunities for a Cloud Broker that trades computing power in an open market. Computing power, or capacity, will become a commodity like pork bellies on the stock exchange.

How do you manage Nano Server and Containers? Glad you asked: with PowerShell, of course. Do you still think PowerShell is an optional skill going forward?
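Since Nano Server ships without a local desktop, PowerShell remoting becomes the front door. A sketch (the host names and credentials are illustrative):

```powershell
# Interactive remote session against a single Nano Server host
$cred = Get-Credential
Enter-PSSession -ComputerName 'nano01' -Credential $cred

# Or fan a command out to a set of Nano/Container hosts at once
Invoke-Command -ComputerName 'nano01','nano02' -Credential $cred -ScriptBlock {
    Get-Service | Where-Object Status -eq 'Running' | Select-Object -First 5
}
```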

In part 3 we will talk in more depth about the game changers: Nano Server and Containers.


Cheers

Tore
