Wim Update Automation – ConfigMgr

Our desktop support folks were looking to keep their Wims up to date for their desktop OSD, and our server folks tagged along for the ride. Of course we went ahead and automated the entire process.

We leverage the previous works of OSD gurus to build an MDT-based CTGlobal Image Factory. That ground is well covered – go forth and seek out the deploymentBunny (aka Mikael Nystrom), Kent Agerlund, Ami Arwidmark, Johan Arwidmark, and so on – so it is not repeated here.

The next step is to build even more automation around this. The full workflow is as follows…

  1. An Azure Automation schedule triggers a runbook monthly to kick it all off
  2. Clean up any existing test VMs in my test cluster… decommission the old tests
  3. Create multiple local running processes on the ImageFactory server
    1. One each to build the local VM
    2. One each to monitor for completion
  4. The completion monitor kicks off new runbooks via webhooks when complete
  5. Refresh the OS image package properties + refresh the package source
  6. Check distribution status, then deploy a VM (ours are in VMware) specifying the test TS
  7. Last package in the test TS is a powerShell script that calls… another runbook
  8. Unit testing – for lack of a better term – of the test VM with its shiny new OS
    1. Verify .Net version is latest.
    2. Windows update verification against Microsoft Catalog per OS
    3. powerShell / WMF Version verification
    4. Check that SMBv1 / IIS / WCF frameworks are disabled
    5. Insert check of your own 🙂
  9. If all unit tests pass – refresh Prod OSImage Package properties and source

 

To start with, gather all the MDT task sequence IDs from ImageFactory, the operating system image package IDs from ConfigMgr, and some sort of unique identifier. I simply picked the OS level being automated. Along with this… it all assumes you have already automated server VM builds in your environment.

## Servers to rebuild
$osList = @("2012","2016","2016-Core")

$osList | ForEach-Object{
    $os = $_

    if ($os -eq '2012')
    {
        $taskID = 'Server2012'
        $osID = 'ABC0000E'
        $deployOS = '2012R2'
    }
    elseif ($os -eq '2016')
    {
        $taskID = 'Server2016'
        $osID = 'ABC00012'
        $deployOS = '2016'
    }
    elseif ($os -eq '2016-Core')
    {
        $taskID = 'Server2016-Core'
        $osID = 'ABC00010'
        $deployOS = '2016-Core'
    }

    # $computer is the test VM name for this OS; the rest of the loop (decommission, build, monitor) continues through the snippets below
    C:\Automation\ServerDecom.ps1 -computer $computer -rebuild $True

Quick note on the decommissioning part. My automation can flag a system for rebuild. Instead of deleting the AD computer object, it resets the object’s password. This is very important later on when the newly deployed test VM has a new IP address and needs to dynamically update AD DNS. If you delete the computer object… say goodbye to your computer’s access to update any of its existing DNS records.

# Reset the computer account password (to its SAM account name) rather than deleting the object
$sam = $computer+'$'
$password = ConvertTo-SecureString -String $sam -AsPlainText -Force
Get-ADComputer $sam -Credential $adCreds | Set-ADAccountPassword -NewPassword:$password -Reset:$true -Credential $adCreds

Moving on – the runbook still needs to kick off the MDT build. This takes a little prep work: a couple of local powerShell scripts saved on the Image Factory server. The reason for all of this is to avoid having a runbook sit running for an excessive amount of time waiting for the MDT build to complete.

Invoke-Command -ComputerName 'ImageFactory Server' -Credential $adCreds -ScriptBlock {
param($deployOS)
# One process runs the build, a second one monitors it for completion
$process = "powershell.exe -file c:\scripts\$deployOS`.ps1"
$process2 = "powershell.exe -file c:\scripts\Monitor-$deployOS`.ps1"
# Create the processes via WMI so they outlive the remote session
([WMICLASS]"\\server\ROOT\CIMV2:win32_process").Create($process)
([WMICLASS]"\\server\ROOT\CIMV2:win32_process").Create($process2)
} -ArgumentList $deployOS

The reason this was done with process creation is that the remote powerShell session will end all of its processes on exit. However, processes created this way from inside the remote session keep running (as SYSTEM) after the session closes.

The two scripts that do the local heavy lifting…
Build VM

Remove-Module CTImageFactory -ErrorAction SilentlyContinue
Import-Module "e:\ImgFactory\Scripts\CTImageFactory.psm1" -WarningAction SilentlyContinue
set-location "e:\ImgFactory\Scripts\"
Get-GlobalVariables
Start-Build -BuildType 'Single' -TaskSequenceID 'Server2016'

Monitor VM

# Poll until the freshly built VM powers itself off (MDT build complete), then call the next runbook
do
{
    Start-Sleep -Seconds 1200
    $state = (Get-VM | Where-Object name -eq 'Server2016').State
}
until ($state -eq 'Off')

Invoke-RestMethod -Method Post -Uri 'https://s1events.azure-automation.net/webhooks?token=webhookToken' -Body (ConvertTo-Json -InputObject @{'osID'='ABC00010';'computer'='Test Server Name';'deployOS'='2016-Core'}) -ErrorAction Stop

The last bit calls the next runbook in the gravy train via webhook. This is where it checks in with the site server to make sure the newly captured wim file is updated and distributed, then moves on to deploy the wim to a test VM for verification.
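On the receiving side, the runbook just pulls those values back out of the webhook payload. A minimal sketch – $WebhookData is the parameter Azure Automation hands to a webhook-started runbook, and the property names match the JSON posted above:

param($WebhookData)

# The webhook body arrives as raw JSON in $WebhookData.RequestBody
$body = ConvertFrom-Json -InputObject $WebhookData.RequestBody
$osID = $body.osID
$computer = $body.computer
$deployOS = $body.deployOS

From there it grabs the site details and gets to work: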

$siteCode = Get-AutomationVariable -Name 'ConfigMgr-SiteCode'
$siteServer = Get-AutomationVariable -Name 'ConfigMgr-SiteServer'

################### Update wim file
$imageProperties = Get-WmiObject -ComputerName $siteServer  -Namespace "root/SMS/site_$sitecode" -Class 'SMS_ImagePackage' -Credential $sccmServiceAccount |where-object -property PackageID -eq $osID
$imageProperties.ReloadImageProperties()
$imageProperties.RefreshPkgSource()

Import-Module "C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1" -Force
$drive = get-psdrive -name MPT -ErrorAction SilentlyContinue
If(!$drive){New-psdrive -Name $siteCode -PSProvider "AdminUI.PS.Provider\CMSite" -root $siteServer -Credential $sccmServiceAccount  -Description "SCCM Site"}
$location = $siteCode+":"
push-location
Set-location $location
$date = get-date -format d
Set-CMOperatingSystemImage -Id $osID -Description $date -version "I'm NEW and Shiny'"
Pop-Location
Remove-Psdrive MPT

# Check on status of distribution
$count = 0
do
{
    $count
    Start-Sleep -Seconds 120
    $status = Get-WmiObject -Credential $sccmServiceAccount -Namespace root\sms\site_$siteCode -Query "SELECT PackageID,PackageType,State,ServerNALPath,LastCopied FROM SMS_PackageStatusDistPointsSummarizer where PackageID = '$($osID)'" -ComputerName $siteServer | Select-Object PackageID,PackageType,State,LastCopied,ServerNALPath
    if(($status.State | Where-Object {$_ -eq 0}).count -gt 1){$complete = $true}
    $count++
}until($complete -or $count -eq 10)
if(-not($complete)){throw "SCCM Package failed to update in time"}

Phew! All that to update content, distribution, and some descriptions, and to check that it’s all where it needs to be. As long as it all completes, I then call our internal automated server build scripts. We use boot media attached to the VM to automatically call the appropriate task sequence. In addition, we set OSD TS variables in a text file accessible from the OSD environment to quickly set all needed variables. No waiting on device collection membership and so forth.
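The variable file trick is simple enough from inside the TS. A minimal sketch, assuming a plain name=value text file (the path and format here are made up for illustration) – Microsoft.SMS.TSEnvironment is the COM object OSD exposes for setting task sequence variables:

# Read name=value pairs from a file the OSD environment can reach and set them as TS variables
$tsEnv = New-Object -ComObject Microsoft.SMS.TSEnvironment
Get-Content '\\server\share\OSDVariables\testserver.txt' | ForEach-Object {
    $name, $value = $_ -split '=', 2
    $tsEnv.Value($name) = $value
}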

The last step of the task sequence is to call… you guessed it – another runbook via a webhook. This last step is all about verifying the final product. Does it have the latest updates? Did the MDT TS do everything I needed it to do? Let’s find out. Also, this is where having clean AD DNS really comes into play.

$session = New-PSSession -ComputerName $computer -Credential $creds
$dotNetTest = Get-AutomationVariable -Name 'dotnetTest'

# Test for dotNet version
$dotNetValidation = Invoke-Command -Session $session -ScriptBlock {
param($dotNetTest)
$dotNetVersions = (Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP' -recurse | Get-ItemProperty -name Release -EA 0 |Where-Object { $_.PSChildName -eq 'Full'} |Select-Object Release).Release
If ($dotNetVersions -ge $dotNetTest) {$dotNetValidation = 'Pass'}
Else {$dotNetValidation = 'Failed'}
$dotNetValidation
} -ArgumentList $dotNetTest

Test for Microsoft Update CU

$updateValidation = Invoke-Command -Session $session -ScriptBlock {
$date = get-date -Format yyyy-MM
$updateTest = "$date Cumulative Update" # Needs to match $date + Cumulative Update *
$2012updateTest1 = "$Date Preview of Monthly Quality Rollup for Windows Server 2012 R2"
$2012updateTest2 = "$Date Security Monthly Quality Rollup for Windows Server 2012 R2"
$catalogURI = "https://www.catalog.update.microsoft.com/Search.aspx?q="
$hotfixList = Get-HotFix | Sort-Object -Property hotfixid
# Grab the last three hotfix IDs installed on the box
$list = $hotfixList[($hotfixList.count -3)..($hotfixList.count -1)].HotfixID
$updateValidation = 'Failed'
$list | ForEach-Object {
    $KB = $_
    $uri = $catalogURI + $KB
    try {
        # have to use -UseBasicParsing and the raw content as IE has not run yet
        $site = Invoke-WebRequest -Uri $uri -UseBasicParsing
        $content = $site.RawContent
        If ($content -like "*$updateTest*") {$updateValidation = 'Pass'}
        ElseIf ($content -like "*$2012updateTest1*") {$updateValidation = 'Pass'}
        ElseIf ($content -like "*$2012updateTest2*") {$updateValidation = 'Pass'}
        }
    Catch {Write-Warning "Failed to lookup $KB"}
    }
# Return the result from the remote session
$updateValidation
}

Getting into the rest of the validations — enter your own as needed

$powerShellValidation = Invoke-Command -Session $session -ScriptBlock {
    $powerShellVersion = '5.1'
    [string]$major = $PSVersionTable.psversion.major 
    [string]$minor = $PSVersionTable.psversion.minor
    $powerShellTest = $major + ".$minor"

    If ($powerShellTest -eq $powerShellVersion) {$powerShellValidation = 'Pass'}
    Else {$powerShellValidation = 'Failed'}
    $powerShellValidation
    }

$features = Invoke-Command -Session $session -ScriptBlock {Get-WindowsFeature | Where-Object -Property 'InstallState' -eq 'Installed'}

$smbTest = $features | Where-Object -Property Name -eq 'FS-SMB1'
If ($smbTest) {$smbValidation = 'Failed'}
Else {$smbValidation = 'Pass'}

$iisTest = $features | Where-Object -Property Name -like 'web*'
If ($iisTest) {$iisValidation = 'Failed'}
Else {$iisValidation = 'Pass'}

$wcfTest = $features | Where-Object -Property Name -like 'NET-WCF-*'
If ($wcfTest) {$wcfValidation = 'Failed'}
Else {$wcfValidation = 'Pass'}

$testing = @{
netvalidation = $dotNetValidation;
updatevalidation = $updateValidation;
powershellvalidation = $powerShellValidation;
smbvalidation = $smbValidation;
iisvalidation = $iisValidation;
wcfvalidation = $wcfValidation
}

# If any check did not pass, email the queue; otherwise go update the production image
If ($testing.Values | Where-Object {$_ -ne 'Pass'}) {$nextStep = 'Email'}
Else {$nextStep = 'UpdateProd'}

At this point, if anything did not pass… it simply emails our ticketing queue to note what failed so we can go fix the MDT TS or whatever the case may be. Otherwise, it moves on and updates the production OS image package the same way as the test.
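The notification side is nothing fancy – something along these lines, with the SMTP server and mailbox names as placeholders for whatever your ticketing queue uses:

# Hypothetical SMTP server and addresses - swap in your own
If ($nextStep -eq 'Email') {
    $failed = ($testing.GetEnumerator() | Where-Object {$_.Value -ne 'Pass'} | ForEach-Object {$_.Key}) -join ', '
    Send-MailMessage -SmtpServer 'smtp.domain' -From 'osdautomation@domain' -To 'ticketqueue@domain' `
        -Subject "Wim validation failed for $computer" -Body "Failed checks: $failed"
}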

Woah… kind of a lot.

So you want to automate against ConfigMgr… do you?

Automation is great. It really is. I’m using Azure Automation Hybrid Runbook Workers for just about everything these days – which is a post on its own – but I wanted to touch on some key interactions with ConfigMgr.

First, the premise. The automation servers are all running Server 2016 Core. So… no ConfigMgr console. No first launch of the console to set up the site connection for powerShell. Actually, no really good way to get any of the default ConfigMgr goodness at all. For the remainder of the post, the examples will all deal with my automation around updating OS image files.

So clearly I just copy over the powerShell module files, with their .dlls, in order to work with the site server remotely. The simple route is to copy the bin path C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin to the same location on your core server. This will at least get you access to the powerShell module.
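If you go that route, it is a one-time copy along these lines – a sketch, with the source machine name standing in for any box that already has the console installed:

# One-time copy of the console bin folder to the core server
$source = '\\machineWithConsole\c$\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin'
$destination = 'C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin'
New-Item -Path $destination -ItemType Directory -Force | Out-Null
Copy-Item -Path "$source\*" -Destination $destination -Recurse -Force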

Simple… but this is really messy in an automated world. The important part here is making a New-PSDrive and providing your service account credentials for the task at hand, since many of the ConfigMgr cmdlets do not offer a -Credential parameter to provide access in line.

Import-Module "C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1" -Force
$drive = get-psdrive -name $siteCode -ErrorAction SilentlyContinue
If(!$drive){New-psdrive -Name $siteCode -PSProvider "AdminUI.PS.Provider\CMSite" -root $siteServer -Credential $sccmServiceAccount  -Description "SCCM Site"}
$location = $siteCode+":"
push-location
Set-location $location
$date = get-date -format d
Set-CMOperatingSystemImage -Id $osID -Description $date -version "I'm a NEW and Shiny'"
Pop-Location
Remove-Psdrive $siteCode

Alternatively, you could certainly create a powerShell session with credentials and then invoke a script block in that session. Sure – this works, but it requires allowing powerShell remoting for your service account. Just something to consider if you’re comfortable putting all your automation in script blocks.
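For completeness, that alternative looks roughly like this – a sketch that assumes the console (and therefore the module) is already installed on the site server you remote into:

# Run the ConfigMgr cmdlets in a remote session on the site server under the service account
$session = New-PSSession -ComputerName $siteServer -Credential $sccmServiceAccount
Invoke-Command -Session $session -ScriptBlock {
    param($siteCode, $siteServer, $osID)
    Import-Module "C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1"
    if (-not (Get-PSDrive -Name $siteCode -ErrorAction SilentlyContinue)) {
        New-PSDrive -Name $siteCode -PSProvider "AdminUI.PS.Provider\CMSite" -Root $siteServer | Out-Null
    }
    Set-Location ($siteCode + ':')
    Set-CMOperatingSystemImage -Id $osID -Description (Get-Date -Format d)
} -ArgumentList $siteCode, $siteServer, $osID
Remove-PSSession $session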

Now… into the rabbit hole. I don’t like having to copy console files around. What happens when a ConfigMgr site update changes the powerShell cmdlets… now which servers need the new files copied where? There are many other reasons to keep this stuff off my core infrastructure, but honestly – I just don’t want to have to think about it. Keep it simple. Keep it to powerShell.

So, some other good options. Obviously, Cim/Wmi work well since so much of ConfigMgr is accessible via this channel. They also support providing credentials! So letting a service account handle a well-scoped task isn’t that big of a hurdle. The hurdle is that not all Wmi objects have the appropriate methods to write data back to your site server.

# Reload the image properties before refreshing the package on the distribution points
$imageProperties = Get-WmiObject -ComputerName $siteServer  -Namespace "root/SMS/site_$sitecode" -Class 'SMS_ImagePackage' -Credential $sccmServiceAccount |where-object -property PackageID -eq $osID
$imageProperties.ReloadImageProperties()
$imageProperties.RefreshPkgSource()

While this was great for the first task of updating the files in the system… there was no way to update the description or version via the Wmi object. When you pipe it through Get-Member, the properties exist, but there is no put() method.

Option five. The bottom of the rabbit hole I went down is to connect directly to the SMS Provider using WMI. This requires creating an SWbemServices object. Say what? No need to explain it here – see the Microsoft Docs page. Thank you, docsMsft!

I rewrote the VB sample in powerShell a while back, and did a pull request. So side note – if you see something, write something, pull request back!

 
$siteCode = ''
$siteServer = 'server.domain'

$credentials = Get-Credential
$username = $credentials.UserName

# The connector does not understand a PSCredential. The following command will pull your PSCredential password into a string.
$password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($credentials.Password))

$NameSpace = "root\sms\site_$siteCode"
$SWbemLocator = New-Object -ComObject "WbemScripting.SWbemLocator"
$SWbemLocator.Security_.AuthenticationLevel = 6
$connection = $SWbemLocator.ConnectServer($siteServer,$Namespace,$username,$password)

 

With a connection in place, you can do something like the below to update the description/version and put() it back.

$wim = $connection.Get("SMS_ImagePackage.PackageID='$osID'")
$wim.Properties_.Item("Version").value = $version
$wim.Properties_.Item("Description").value = $Description
$wim.Put_()

Here is the short of it. I can still provide credentials to run as whatever service principal I need, and I get the benefit of being able to do… well, anything I want, since it’s a direct connection against the SMS Provider itself.

There you have it, folks. A full repertoire of how to get things done when coding against ConfigMgr in an automated way. Pick your poison wisely, or just mix and match as needed.
 

Strong Cryptography – .NET + powerShell

I spend a lot of time using Invoke cmdlets in powerShell. Over the past year there has been a need to address how Invoke-RestMethod and Invoke-WebRequest handle SSL/TLS connections, as service providers and API endpoints drop older versions of SSL and TLS. The good news is that providers are finally dropping these insecure channels; the bad news is that Microsoft applications still default to allowing them as a client.

A few things for consideration. Disabling TLS 1.0 and all of SSL is rather simple – a few registry keys and a reboot – done. See the end of the post for the SCHANNEL reg keys.
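If you would rather script the keys than import a .reg file, it is only a few lines of powerShell per protocol – a sketch for the TLS 1.0 client side, which you can repeat for each protocol and role in the export at the bottom:

# Disable TLS 1.0 for the client side via SCHANNEL (repeat per protocol/role, then reboot)
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'Enabled' -Value 0 -Type DWord
Set-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 1 -Type DWord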

It came as a surprise to me, then, when the SecurityProviders keys were set and I started to get connection failures with an 'Underlying connection has failed' error — very common wording for an SSL/TLS handshake failure. What's going on, powerShell? Why are you failing me?

Turns out… its because of .NET. If you pop open a powerShell session you can run

[Net.ServicePointManager]::SecurityProtocol

— and you will likely get the following output: "Ssl3, Tls"

Not good, powerShell. I said Ssl is disabled, so why wouldn't you handshake with something better? Turns out powerShell isn't as smart as you would think. You have a couple of options.

First — in the worst case, if something else on the box still needs Ssl… please don't… but if you must keep it around, you can force a powerShell session to use TLS 1.2 by setting

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 

But please don’t do this as a workaround in your scripting. Please get rid of whatever other Ssl requirements are preventing a global, system-wide change.

Second — you can tell your OS to set the .Net Framework to use strong cryptography! You would think the Wow6432Node key wouldn’t matter — who runs powerShell as a 32-bit process? — but set both the 64-bit and Wow6432Node (32-bit) locations so the setting applies no matter the bitness of the process reading it. Set the following, reboot, and your .Net calls from powerShell via the ServicePointManager will default to Tls, Tls11, Tls12.

Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord

Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord

 

No more connection issues!

 

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"Enabled"=hex(b):00,00,00,00,00,00,00,00
"DisabledByDefault"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]
"Enabled"=hex(b):00,00,00,00,00,00,00,00
"DisabledByDefault"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001

Microsoft (Azure) APIs

I wanted to share some thoughts on getting your head around all the wonderful, and sometimes frighteningly complicated, Microsoft APIs. It’s not so bad~

Recently I did a user group talk on this topic, and was shocked by the number of hands that were raised when I asked how many developers were in the room. An Azure user group, and almost every single person raised their hand. So I pushed them… how many have written their own API? Again, almost all hands were in the air. By the end, though, one of them said – this is too much info! This is a quick breakdown of what I hoped they took away from the talk. In short… read the manual.

  1. In the case of APIs, good documentation will make your day. Microsoft does an amazing job of documenting. Let that sink in. Yes. Yep. OK, go check out Microsoft Docs.
  2. I wasn’t at all surprised that, with this many developers at an Azure user group, a lot of .Net (Core)/C# developers were in attendance. If a particular language is what you know… go read the manual again on the docs page. The point being that Microsoft does amazing work documenting the REST API reference for many languages, and supports many different client libraries, such as:
  • .NET
  • Java
  • Node.js
  • Python
  • Azure CLI

 

In the end, it is all the same basic components and concepts no matter which language you develop in. Build a URI call with an action (method) against an API endpoint, and include with it a header and some body data for the thing you are trying to do. For the third time (and done): go read the manual for your API. It will tell you everything you need to know.
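In powerShell terms, every one of those pieces is just a parameter on one call. A generic sketch – the URI, token, and body here are placeholders, not a real endpoint:

# The same pieces in powerShell: a method (the action), an endpoint URI, headers, and a json body
$headers = @{ 'Authorization' = "Bearer $accessToken"; 'Content-Type' = 'application/json' }
$body = @{ name = 'example-resource'; location = 'eastus' } | ConvertTo-Json
$response = Invoke-RestMethod -Method Post -Uri 'https://api.example.com/v1/resources' -Headers $headers -Body $body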

oAuth, tokens, and powerShell

Google, Microsoft, Amazon, Box, Twitter, Trello, Facebook… what do they all have in common? oAuth authentication workflows.

Take your pick of languages to find samples of how to authenticate against all of these endpoints, and you will have to choose between SDKs, NuGet packages, and library after library to pull it all together. Sure, these options are great for application developers. But I’m not a developer. I’m a system administrator. An automation engineer. I have no interest in loading assemblies into core infrastructure that changes day to day…

Enter powerShell and Invoke-WebRequest / Invoke-RestMethod. With these two commands as the base, and a bit of ingenuity, you can make all the calls needed to authenticate yourself and start working with a site’s API endpoints. The added benefit of doing it this way? You can use the same code on powerShell 6 on Linux or Windows.

1. Figuring out the authentication flow.

oAuth 2 authentication flows are configured ahead of time by the vendor you are connecting to. Systems will vary, but the general flow you will interact with is answered by the following questions. Are you authenticating as a user every time you need to access a service? What about automating server-to-server work? Do you want to prompt users for consent once, assume consent from SSO-referred connections, or require consent every single time you request authentication? oAuth supports just about any combination of this, but any given vendor won’t necessarily have every option configured for you to consume. Also, most user-based authentication flows support the use of refresh tokens. These special tokens can be used to authenticate over and over without re-proving that the requester is still valid. The idea is that you already went through the authentication, authorization, and validation process once – no need to do it again, since only the authorized account holder should hold the refresh token.

Are you authenticating as a service account, or an automation system? oAuth 2.0 is also set up to support authentication by signing a request with a private key. Vendors vary – Google will provide a .p12 file; Box requires you to create your own key pair and upload the public key. Either way, you use this private key to digitally sign a properly formed request to get a token, and it can be done with no user interaction.
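To show how simple the exchange itself is once you hold a refresh token, a generic oAuth 2.0 refresh looks about like this – the token endpoint, client id, and secret are placeholders for whatever your vendor issues:

# Exchange a refresh token for a fresh access token at a generic oAuth 2.0 token endpoint
$body = @{
    grant_type    = 'refresh_token'
    refresh_token = $refreshToken
    client_id     = $clientID
    client_secret = $clientSecret
}
$tokenResponse = Invoke-RestMethod -Method Post -Uri 'https://vendor.example/oauth2/token' -Body $body
$accessToken = $tokenResponse.access_token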

2. The gotchas of doing oAuth tokens

In a user-based authentication flow, at some point you will need to make a request in a web browser. That works great if you are on Linux and have access to the selenium-driver, but in a Windows world it can get tricky. Invoke-WebRequest gets most of the way there, but just not far enough in complex vendor environments. Basic auth / form auth frequently don’t work well here either. As mentioned previously about refresh tokens, though – it is possible to do this web browser process once, gather a refresh token, and then carry on for as long as you keep your refresh token uncompromised.

Getting an access token via a JSON Web Token (JWT) request alone is more complicated, but it is the general process for a service-to-service oAuth request. Google it, and you will get lots of explanations of all the bits and pieces. You’ll also get very few explanations of how to actually generate one.

3. Code some stuff – go go powerShell

Using the UMN-Google, UMN-Azure, or UMN-Trello repos at https://github.com/umn-microsoft-automation as an example, you will find functions that do the heavy lifting on getting access to various API endpoints.

In any of these cases there is a general flow to the process.

  1. Gather who is requesting access to what
  2. Take that information and go to a claims endpoint to verify authentication. This is generally done in a web browser. These powerShell functions are set up to do an IE popup to let a user log in for verification.
  3. Take the claim received if verified, and go to token endpoint to exchange for a token and possibly a refresh token.

A. function ConvertTo-Base64URL
This is a core component that encodes JSON data into the needed Base64Url-encoded strings. This is required when using certificates to sign a JWT request.
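The encoding itself is nothing exotic – roughly the following. A sketch of the idea, not the exact UMN implementation:

# Standard base64, swapped to the url-safe alphabet (+/ become -_) with the padding stripped
function ConvertTo-Base64URL {
    param([string]$text)
    $bytes = [System.Text.Encoding]::UTF8.GetBytes($text)
    [System.Convert]::ToBase64String($bytes).Replace('+','-').Replace('/','_').TrimEnd('=')
}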

B. function Get-xxOAuthTokenUser (where xx = G for Google, or Azure)
This function assumes that you have done the work ahead of time to create a google project or Azure application endpoint. Mostly, you just authenticate in a web browser to get an authorization code that is exchanged later for your tokens.

C. function Get-xxOAuthTokenService (where xx = G for Google, or Azure)
This function uses a signed JWT request from a private key (Google) or secret key (Azure) to get an access token. Service-to-service flows can go directly to the token endpoint with a properly formulated JWT request, as in the sketch below.
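That last hop is just another POST. Once the signed JWT is assembled, it goes to the token endpoint roughly like this – the jwt-bearer grant type shown is what Google’s flow uses, the endpoint URI is a placeholder, and other vendors will vary:

# Trade the signed JWT assertion for an access token (service-to-service flow)
$body = @{
    grant_type = 'urn:ietf:params:oauth:grant-type:jwt-bearer'
    assertion  = $signedJWT
}
$tokenResponse = Invoke-RestMethod -Method Post -Uri 'https://vendor.example/oauth2/token' -Body $body
$accessToken = $tokenResponse.access_token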

Emergency! We need that patched!

Infrastructure is never a perfect world when it comes to Microsoft patch management. Is your WSUS service healthy? Is it integrated to SCCM as a software update point? Are you free of errors, but still not sure? Did that security patch really go out?

Your security monitoring, SCOM, or log analytics tool might tell you otherwise; and what are you left to do?

Do you believe the SCCM deployment reports, or your monitoring that says a critical patch is missing?

No matter the case of what, or why – sometimes there comes a point where you just need to brute force install a patch. Also, you need it, like, NOW.

You’ve got options if you’re prepared with DSC, but this is production, and we need them patched now – reboot later! Business requirements… alas.

Hopefully you can pull a dynamic list of systems from SCOM, SCCM, VMware, Hyper-V, AD, or wherever… and just pipe it through. Obviously, if you need to reboot now as well, that is easy enough to add to this little one-off.

Prep:
1. Get a list of systems you need to apply the patch to.
2. Extract the .cab of the KB you download from Microsoft.
3. Assumes you have remote powerShell / Admin access.

$listOfComputers | ForEach-Object {
    $session = New-PSSession $_
    # Copy the extracted .cab to where dism will look for it on the remote system
    Copy-Item 'path to patchKB.cab' 'c:\windows\temp\patchKB.cab' -ToSession $session

    Invoke-Command -Session $session -ScriptBlock {
        ## Check if KB is already installed
        $KB = Get-HotFix | Where-Object {$_.hotfixid -eq 'KB#####'}
        if (!$KB) {

            ## Just in case you need to verify the OS version ##
            $os = (Get-WmiObject -Class Win32_OperatingSystem).version

            ## dism... I know -- allows for remote execution with no new processes or EULA prompt for the KB ##
            dism.exe /online /add-package /PackagePath:c:\windows\temp\patchKB.cab /norestart
        }
        Else {Write-Host 'KB previously installed'}
    }
    Remove-PSSession $session
}