Azure Kubernetes Service (AKS) with Advanced Networking

The bulk of what you need to know is in Microsoft's online docs here. It's good information, so I won't repeat what's in it; I will, however, cover a few gotchas I came across.

First thing to know is that when you create the AKS cluster from the portal, it creates two Resource Groups (RGs). One has the AKS resources along with the VNET, and the other has all the nodes, scale sets, load balancers (once they get created), and so on. The process also creates a Service Principal (Enterprise Application) that is used when you run kubectl commands and the like. You can find this by looking in the IAM section of the second RG that gets created. The problem is that, by default, it doesn't have any access to the VNET in the main RG. So when you try to apply a deployment that has a load balancer, it can't bind to the VNET. If you run:

az aks browse --resource-group $resGrp --name $aks

to launch the dashboard, you'll see the error. Go into the IAM section of the VNET and add the Service Principal. I added it as a Contributor.
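The same role assignment can be scripted with the Azure CLI. A minimal sketch, where the resource group, VNET name, and $spAppId (the Service Principal's application ID) are placeholders you'd fill in from your own environment:

```powershell
# Look up the VNET's resource ID (resource group and VNET name are placeholders)
$vnetId = az network vnet show --resource-group myRG --name myVnet --query id -o tsv

# Grant the AKS Service Principal Contributor rights scoped to the VNET
az role assignment create --assignee $spAppId --role Contributor --scope $vnetId
```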

One of the reasons to use Advanced networking is so you can peer AKS with other networks, including one that has a Site-to-Site VPN connection to your on-prem site. The thing it took me a while to find in the docs is that you have to create the peering on both the Kubernetes VNET and the VNET with the VPN connection.
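As a sketch, the two-way peering might look like this with the Azure CLI (all resource group and VNET names here are placeholders; if the VNETs live in different resource groups, --remote-vnet needs the full resource ID rather than just the name):

```powershell
# Peer the AKS VNET to the VPN VNET...
az network vnet peering create --resource-group aksRG --vnet-name aksVnet `
    --name aks-to-vpn --remote-vnet vpnVnetId --allow-vnet-access

# ...and the VPN VNET back to the AKS VNET (peering must exist on both sides)
az network vnet peering create --resource-group vpnRG --vnet-name vpnVnet `
    --name vpn-to-aks --remote-vnet aksVnetId --allow-vnet-access
```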

So this brings us to one more quirk about networking. The Kubernetes service IP ranges and the Docker bridge IPs don't show up anywhere in Azure; they're squirreled away somewhere in Kubernetes land, I guess. But that means the VNET with the VPN connection doesn't know where those addresses are, and they are the ones that matter. At least the Kubernetes service IPs do. (Still working on the routing situation for those.)


Configuration Manager: Using Collection Variables to automate Operating System Deployment (OSD)

We have multiple computer labs across the campus, with a variety of unique software titles required for each location, and we wanted to improve the process of re-imaging a computer in a lab. One of the improvements we made was to leverage collection variables. By using collection variables, we were able to make the re-imaging of a known computer nearly zero touch: the technician only needs to select the task sequence if the computer is known. If the computer is unknown, the technician will need to select a location. This also enables us to explore using required task sequence deployments to lab locations.

We are using a collection variable for each lab called Location.  We use a combination of the building abbreviation and room number for the variable value (e.g. WBOB100).  Each computer lab has a device collection with the Location variable set to the appropriate value.
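Setting the variable can also be scripted from the ConfigMgr PowerShell module rather than clicking through the console. A sketch, assuming a collection named "Lab WBOB100" (a hypothetical name for illustration):

```powershell
# Run from a ConfigMgr console PowerShell session (site drive loaded)
New-CMDeviceCollectionVariable -CollectionName 'Lab WBOB100' `
    -VariableName 'Location' -Value 'WBOB100'
```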

Collection Properties-Collection Variables tab selected-Variables Location set to WBOB100

We use another collection variable called Unit.  The limiting collection we use for the lab collections is where we set the Unit variable.

Collection Properties-Collection Variables tab selected-Variables Unit set to ATS

In the task sequence, we use the Unit variable to drive an automated naming convention script. If the Unit is equal to a value it runs the script to set the OSDComputerName variable based on the Location collection variable plus the serial number of the device.

Task Sequence step with Options tab selected. Any conditions true. Unit equals ATS.
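Our actual naming script isn't reproduced here, but a minimal sketch of the logic would look like the following, using the standard Microsoft.SMS.TSEnvironment COM object that is only available inside a running task sequence:

```powershell
# Only available while a task sequence is running
$tsEnv = New-Object -ComObject Microsoft.SMS.TSEnvironment

# Read the Location collection variable and the device serial number
$location = $tsEnv.Value('Location')
$serial = (Get-WmiObject -Class Win32_BIOS).SerialNumber

# Name = location + serial, trimmed to the 15-character NetBIOS limit
$name = ($location + $serial)
$tsEnv.Value('OSDComputerName') = $name.Substring(0, [Math]::Min(15, $name.Length))
```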

For new computers, a different script runs if the Unit variable is not equal to that value. This script presents a drop-down menu to select the location, then sets the Unit and Location variables based on the selected choice and sets the OSDComputerName.

The Location variable is used throughout the task sequence for decision making similar to the naming convention (e.g. binding the devices to specific locations, running a particular child task sequence). We use the Options tab to check whether the Location variable is equal to the location.

Task Sequence step with Options tab selected. Any conditions true. Location equals WBOB100.

SCCM Client Health PowerBI Report – Version 0.1

Recently we've been struggling to get our client health under control. We implemented the script developed by Anders Rodland, which has helped us fix large numbers of broken clients in our environment. I didn't really want to stand up an SSRS server just to report on client health, so I took a stab at using Power BI.

Installation of the report is very easy. Simply download the Power BI template here and open it in Power BI Desktop. On first run you'll be asked to enter the SQL server hosting your script data, the database on that server, and the clients table. This should allow you to connect it easily in your environment even if you've customized how you store the SQL data.

ClientHealth_Install

The default (first) tab of the report is just a simple dashboard. I was looking to track all the basics pulled in from the client health script so I could see at a glance where we might have a problem. Below is a snapshot demonstrating how it breaks down each metric and shows where you might be encountering problems in your environment.

Additionally, I was looking to pull in info about OS version and system architecture just to get a better understanding of whether any outliers in those areas were having special problems.

ClientHealth_Dashboard

The second page of the report is an overview of the timeline data I was looking for. It shows when the script is doing client installs and how many were installed, as well as OS update data and hardware inventory data. I added slicers (the sliders below each timeline) so you can zoom in on specific time periods if you'd like to see which machines did something at a specific time. I've already found this useful for finding timeline outliers that were pulling my graphs way off track.

ClientHealth_DateSlicing

The third page gives insights into client versions, including a gauge that can be configured to display how many up-to-date clients are in your environment. It also includes breakdowns of last updated and last rebooted, as well as operating system breakdowns.

ClientHealth_ClientVersionOSInsights

If there are any additional features you think would be handy, let me know on Twitter (@jbolduan) and I'll work on including them in future releases of the Power BI template.

Getting Redirected URIs in PowerShell

I recently ran into an issue where I needed the direct resource URI in a PowerShell script. This is incredibly useful if you need to parse the actual URI instead of just pulling the resource the redirecting URI points at. In my case I wanted the Firefox URI, which points at the executable, so I could pull the version out of the URI without having to download and analyze the executable.

You first need to grab the response headers with an Invoke-WebRequest:

$request = Invoke-WebRequest -Method Head -Uri $Uri

Next, we need to determine whether we're running PowerShell 5 or PowerShell Core and pull the absolute URI out of the request object:

if ($request.BaseResponse.ResponseUri -ne $null) {
    # This is for Powershell 5
    $redirectUri = $request.BaseResponse.ResponseUri.AbsoluteUri
}
elseif ($request.BaseResponse.RequestMessage.RequestUri -ne $null) {
    # This is for Powershell core
    $redirectUri = $request.BaseResponse.RequestMessage.RequestUri.AbsoluteUri
}

Now, sometimes you may get another redirected URI as a response. In these cases you'll need to detect that and handle it. This is done through error handling: catching the exception, checking for an HttpResponseException matching 302, and then running the whole thing again:

if (($_.Exception.GetType() -match "HttpResponseException") -and ($_.Exception -match "302")) {
    $Uri = $_.Exception.Response.Headers.Location.AbsoluteUri
    $retry = $true
}
else {
    throw $_
}

This is a quick and easy way to pull redirected URIs from a given URI. Putting it all together, we get the function below:

function Get-RedirectedUri {
    <#
    .SYNOPSIS
        Gets the real download URL from the redirection.
    .DESCRIPTION
        Used to get the real URL for downloading a file, this will not work if downloading the file directly.
    .EXAMPLE
        Get-RedirectedUri -Uri "https://download.mozilla.org/?product=firefox-latest&os=win&lang=en-US"
    .PARAMETER Uri
        URI for the redirected URI to be un-obfuscated
    .NOTES
        Redone per issue #2896 in PowerShell Core: https://github.com/PowerShell/PowerShell/issues/2896
    #>

    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true)]
        [string]$Uri
    )
    process {
        do {
            try {
                $request = Invoke-WebRequest -Method Head -Uri $Uri
                if ($request.BaseResponse.ResponseUri -ne $null) {
                    # This is for Powershell 5
                    $redirectUri = $request.BaseResponse.ResponseUri.AbsoluteUri
                }
                elseif ($request.BaseResponse.RequestMessage.RequestUri -ne $null) {
                    # This is for Powershell core
                    $redirectUri = $request.BaseResponse.RequestMessage.RequestUri.AbsoluteUri
                }

                $retry = $false
            }
            catch {
                if (($_.Exception.GetType() -match "HttpResponseException") -and ($_.Exception -match "302")) {
                    $Uri = $_.Exception.Response.Headers.Location.AbsoluteUri
                    $retry = $true
                }
                else {
                    throw $_
                }
            }
        } while ($retry)

        $redirectUri
    }
}
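Using the Firefox example from earlier, resolving the vendor redirect and fishing the version out of the result might look like this (the split index is a guess based on Mozilla's current release URL layout, so adjust it if the path changes):

```powershell
$uri = Get-RedirectedUri -Uri 'https://download.mozilla.org/?product=firefox-latest&os=win&lang=en-US'

# The resolved URI contains the release number, e.g. .../firefox/releases/<version>/...
$version = ($uri -split '/')[6]
```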

Test your Chocolatey Packages in a Windows Container

Containers are a great place to test your Chocolatey packages, or even packages from another source. You don't have to waste time creating and tearing down VMs, and perhaps most importantly you can avoid those "it worked on my machine" problems. By spinning up a clean container every time, you will know all your dependencies and can specify them. There are many approaches, and this is just one example. This article assumes you have a place to run Windows containers, Docker, and a minimal amount of Docker experience. You can run this on a Windows 10 machine, Server 2016 with Hyper-V, or Azure.

The complete code is posted below, but let's break it up a bit, walk through small pieces, and point out some of the variations on the source of the package.

In this first version we'll look at testing a package you've created, assuming you have the .nupkg file locally. First, it's useful to define a few variables, such as the path to the .nupkg file. The $waitTime variable gives Chocolatey time to install before we try to fetch the logs. All the real work is defined in the variable $ps. $ps contains the code that downloads and installs the latest Chocolatey client. Next it uses that client to install the Chocolatey package you pointed $chocoPack at. Finally it downloads and runs a script from Microsoft that keeps the container running after the install so you can examine it. (The nature of a container is to stop running after it has run its piece of code.) We'll get to starting the container further down.

$chocoPack = ''  # path to your local .nupkg file
$waitTime = 300  # [int] seconds: estimate of how long the package install takes
$waitUrl = 'https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/windows-server-container-tools/Wait-Service/Wait-Service.ps1'
$ps = "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'));choco install $chocoPack -y;Invoke-WebRequest -Uri '$waitUrl' -OutFile 'c:\Wait-Service.ps1';c:\Wait-Service.ps1 -ServiceName WinRm -AllowServiceRestart"

In this next section we'll look at installing a package from the default public package repo. The only difference here is that $chocoPack will just contain the name of the public package. This example installs vscode, because vscode is awesome.

$chocoPack = 'vscode'
$waitTime = 300  # [int] seconds: estimate of how long the package install takes
$waitUrl = 'https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/windows-server-container-tools/Wait-Service/Wait-Service.ps1'
$ps = "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'));choco install $chocoPack -y;Invoke-WebRequest -Uri '$waitUrl' -OutFile 'c:\Wait-Service.ps1';c:\Wait-Service.ps1 -ServiceName WinRm -AllowServiceRestart"

And the last example uses your own private Chocolatey server and specifies a version. In the choco install section I've added the $version variable and '-s $privateServerURL'. -s specifies the 'Source', in this case the URL to your server.

$chocoPack = 'my-package'
$version = '--version 1.0.0'
$privateServerURL = 'http:///choco/nuget/'
$waitTime = 300  # [int] seconds: estimate of how long the package install takes
$waitUrl = 'https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/windows-server-container-tools/Wait-Service/Wait-Service.ps1'
$ps = "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'));choco install $chocoPack $version -s $privateServerURL -y;Invoke-WebRequest -Uri '$waitUrl' -OutFile 'c:\Wait-Service.ps1';c:\Wait-Service.ps1 -ServiceName WinRm -AllowServiceRestart"

Now let's look at actually spinning up the container. The first line is the actual docker command to spin up the container based on the windowsservercore image, and it tells PowerShell to run the code we defined above. It also stores the container instance ID in $cid, which you will need later. Then it waits to give the package time to install. The Invoke-Command will spit out the log files about the install. Finally, you can use PowerShell to "remote" into the container and manually look at logs or files, etc.

# $dockerArgs can hold any extra docker run options (may be empty)
($cid = docker run -d $dockerArgs microsoft/windowsservercore powershell.exe -executionpolicy bypass $ps)
Start-Sleep -Seconds $waitTime
Invoke-Command -ContainerId $cid -RunAsAdministrator -ScriptBlock {
    Get-Content C:\choco\logs\choco.summary.log
    #Get-Content C:\choco\logs\chocolatey.log
    choco list --local-only ## list installed packages
}
Enter-PSSession -ContainerId $cid -RunAsAdministrator

And here it is all together.

$chocoPack = ''  # name or path of the package to test
$waitTime = 300  # [int] seconds: estimate of how long the package install takes
$waitUrl = 'https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/windows-server-container-tools/Wait-Service/Wait-Service.ps1'
$ps = "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'));choco install $chocoPack -y;Invoke-WebRequest -Uri '$waitUrl' -OutFile 'c:\Wait-Service.ps1';c:\Wait-Service.ps1 -ServiceName WinRm -AllowServiceRestart"
# $dockerArgs can hold any extra docker run options (may be empty)
($cid = docker run -d $dockerArgs microsoft/windowsservercore powershell.exe -executionpolicy bypass $ps)
Start-Sleep -Seconds $waitTime
Invoke-Command -ContainerId $cid -RunAsAdministrator -ScriptBlock {
    Get-Content C:\choco\logs\choco.summary.log
    #Get-Content C:\choco\logs\chocolatey.log
    choco list --local-only ## list installed packages
}
Enter-PSSession -ContainerId $cid -RunAsAdministrator
It's also worth noting that in these examples I'm using the -d switch in 'docker run -d', which tells it to run detached, or in the background. You can use -it to just jump in and watch it as well. When I first started working with Docker there were issues doing this from PowerShell; the latest version of the tools fixed this problem 🙂

UPDATE: You cannot run a full GUI version of Windows Server in a container, so if you are trying to install an application that has dependencies on .dll files not in Server Core, this may not work.

Also, since Chocolatey is just PowerShell files that install an application, this basic process can work to install any application that can be installed on Server Core from PowerShell/command line.

OAuth, tokens, and PowerShell

Google, Microsoft, Amazon, Box, Twitter, Trello, Facebook… what do they all have in common? OAuth authentication workflows.

Take your pick of languages to get samples of how to authenticate against all of these endpoints, and you will have to pick and choose between SDKs, NuGet packages, library after library to pull it all together. Sure, these options are great for application developers. But I'm not a developer. I'm a system administrator. An automation engineer. I have no interest in loading assemblies into core infrastructure that changes day to day…

Enter PowerShell, with Invoke-WebRequest and Invoke-RestMethod. With these two commands as the base, and a bit of ingenuity, you can do all the calls needed to authenticate yourself and start working with a site's API endpoints. Added benefit of doing it this way? You can use the same code on PowerShell 6 on Linux or Windows.

1. Figuring out the authentication flow.

OAuth 2.0 authentication flows are configured ahead of time by the vendor you are connecting to. Systems will vary, but the general flow you will interact with is determined by the following questions. Are you authenticating as a user every time you need to access a service? What about automating server-to-server work? Do you want to prompt users for consent once, assume consent from SSO-referred connections, or require consent every single time you request authentication? OAuth supports just about any combination of these, but isn't necessarily easy to consume. Also, most user-based authentication flows support the use of refresh tokens. These special tokens can be used to authenticate in a never-ending loop without proving the requester is still valid. The idea is that you already went through the authentication, authorization, and validation process once; there's no need to do it again, since only the authorized account holder would have gotten the refresh token.

Are you authenticating as a service account or automation system? OAuth 2.0 is also set up to support authentication by signing a request with a private key. Vendors vary: Google will provide a .p12 file; Box requires you to create your own key pair and upload the public key. Either way, you use this private key to digitally sign a configured request to get a token, and this can be done with no user interaction.

2. The gotchas of doing OAuth tokens

In a user-based authentication flow, at some point you will need to make a request in a web browser. That works great if you are on Linux and have access to the selenium-driver, but in a Windows world it can get tricky. Invoke-WebRequest gets most of the way there, but just not far enough in complex vendor environments. Basic auth / form auth frequently don't work well here either. As mentioned previously about refresh tokens, though, it is possible to do this web browser process once, gather a refresh token, and then continue on in life for as long as you keep your refresh token uncompromised.

Getting an access token via a JSON Web Token (JWT) request alone is more complicated, but it is the general process for doing a service-to-service OAuth request. Google it, and you will get lots of explanations of all the bits and pieces. You'll also get very few explanations of how to generate one.

3. Code some stuff – go go PowerShell

Using the UMN-Google, UMN-Azure, or UMN-Trello repos at https://github.com/umn-microsoft-automation as examples, you will find functions that do the heavy lifting of getting access to various API endpoints.

In any of these cases there is a general process flow:

  1. Gather who is requesting access to what.
  2. Take that information and go to a claims endpoint to verify authentication. This is generally done in a web browser; these PowerShell functions are set up to do an IE pop-up to let a user log in for verification.
  3. Take the claim received, if verified, and go to a token endpoint to exchange it for a token and possibly a refresh token.

A. function ConvertTo-Base64URL
This is a core component that encodes JSON data into the needed Base64URL-encoded strings. This is needed when using certificates to sign a JWT request.
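The real function lives in the repos above, but the core idea is small enough to sketch here: standard Base64 with the URL-unsafe characters swapped and the padding stripped. The function name below is hypothetical, to avoid clashing with the real one:

```powershell
function ConvertTo-Base64UrlSketch {
    # Hypothetical minimal version; the UMN repos hold the real implementation
    param([byte[]]$Bytes)
    [Convert]::ToBase64String($Bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_')
}

# e.g. encode a JWT header segment
$header = [Text.Encoding]::UTF8.GetBytes('{"alg":"RS256","typ":"JWT"}')
ConvertTo-Base64UrlSketch -Bytes $header
```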

B. function Get-xxOAuthTokenUser (where xx = G for Google, or Azure)
This function assumes that you have done the work ahead of time to create a Google project or Azure application endpoint. Mostly, you just authenticate in a web browser to get an authorization code that is exchanged later for your tokens.

C. function Get-xxOAuthTokenService (where xx = G for Google, or Azure)
This function uses a JWT request signed with a private key (Google) or secret key (Azure) to get an access token. Service-to-service flows can go directly to the token endpoint with a properly formulated request.
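For a feel of what the token exchange looks like, here is a sketch of an Azure AD client-credentials call against the v1 token endpoint; the tenant, client ID, and secret are placeholders, and the UMN-Azure functions wrap this kind of request rather than using exactly this code:

```powershell
# Placeholders: fill in your own tenant and app registration details
$tenant = 'contoso.onmicrosoft.com'
$body = @{
    grant_type    = 'client_credentials'
    client_id     = '<app-id-guid>'
    client_secret = '<app-secret>'
    resource      = 'https://management.azure.com/'
}

# Exchange the credentials for a bearer token at the token endpoint
$token = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenant/oauth2/token" -Body $body
$token.access_token
```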

Turning Ping On/Off With An SCCM Configuration Item

Details

The best way I've found to disable ping using a configuration item through a
script is the HNetCfg.FwMgr COM object. Our fleet is Windows 7+ and Server 2003(ick)+, so
the solution needs to be more robust than using the Server 2012+ built-in
firewall cmdlets.

The scripts are quite simple, but we rely heavily on the following core code:

# Get the firewall manager COM object
$Firewall = New-Object -ComObject HNetCfg.FwMgr

# Get the domain policy (0)
$Policy = $Firewall.LocalPolicy.GetProfileByType(0)

# Get the settings for ping
$IcmpSettings = $Policy.IcmpSettings

This code gets the firewall manager COM object, then grabs the domain
firewall policy and the ICMP settings for the domain policy. This is flexible,
and the COM object exists in older versions of Windows.

Now, there is really only one setting we care about in the ICMP Settings:

$IcmpSettings.AllowInboundEchoRequest

If it's $true then ping is turned on; if it's $false then ping is disabled.

Thus, the discovery script is fairly simple: all we have to do is call .ToString()
on $IcmpSettings.AllowInboundEchoRequest, and if we want to enable/disable
ping, all we have to do is assign $true or $false to that property.

Below I’ve put the full scripts I used for our code.

Scripts

Discovery

$Firewall = New-Object -ComObject HNetCfg.FwMgr

$Policy = $Firewall.LocalPolicy.GetProfileByType(0)

$IcmpSettings = $Policy.IcmpSettings

$IcmpSettings.AllowInboundEchoRequest.ToString()

Remediation – Turn Ping On

$Firewall = New-Object -ComObject HNetCfg.FwMgr

$Policy = $Firewall.LocalPolicy.GetProfileByType(0)

$IcmpSettings = $Policy.IcmpSettings

$IcmpSettings.AllowInboundEchoRequest = $true

Remediation – Turn Ping Off

$Firewall = New-Object -ComObject HNetCfg.FwMgr

$Policy = $Firewall.LocalPolicy.GetProfileByType(0)

$IcmpSettings = $Policy.IcmpSettings

$IcmpSettings.AllowInboundEchoRequest = $false