Syntax checking your Powershell code with Pester

If you just want to syntax check your Powershell code with Pester, scroll to the bottom and grab my describe block. If you’d like to go on a little journey with me, keep reading.

I’m in the process of getting as much of my team’s Powershell code through a CI pipeline using Azure DevOps. Due to some issues with accidentally pushing code with syntax and encoding errors to production, one of the first things I wanted to do was simply validate that the code was syntactically valid. I found a post on the PowerShell.org forums from 2014 that inspired me to write this test:
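
For reference, a minimal sketch of that kind of test – not the exact describe block from the full post, and assuming Pester 4 syntax – might look something like this:

# Sketch only: fail the pipeline if any script in the repo has parse errors.
Describe 'PowerShell syntax' {
    $scripts = Get-ChildItem -Path $PSScriptRoot -Filter '*.ps1' -Recurse
    foreach ($script in $scripts) {
        It "$($script.Name) parses without errors" {
            $parseErrors = $null
            [System.Management.Automation.Language.Parser]::ParseFile($script.FullName, [ref]$null, [ref]$parseErrors) | Out-Null
            $parseErrors.Count | Should -Be 0
        }
    }
}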


Finding a Specific Setting in your Client Settings

With ConfigMgr 1806, the Application Catalog service and site roles are no longer needed, so for me it was a good opportunity to have two fewer servers to care for and feed. But because I was a dummy, in the Default Client Settings I had selected my app catalog site explicitly rather than letting the client pick automatically (I have no idea why I did this). And when you make a new client setting, it inherits the settings of the default, so anyone who made a custom policy with Computer Agent had that setting too. Having this setting defined prevents the removal of the Application Catalog roles.

I have 41 Client Settings, and I’m not about to look through each of them for Computer Agent settings. So, Powershell to the rescue. Get-CMClientSetting will get you a list of all your Client Settings, but there won’t be any detail on them. There’s a -Setting parameter which will return all the settings of the specified type, but nothing to tell you which Client Setting contains that setting. So you have to get every Client Setting and interrogate it to see if it has the Computer Agent setting. With this code I was able to find the five Settings that had Computer Agent specified, and change the Default Application Catalog website point.

$allSettings = Get-CMClientSetting | Select-Object -ExpandProperty Name
foreach ($name in $allSettings) {
    if (Get-CMClientSetting -Setting ComputerAgent -Name $name) {
        Write-Host $name
    }
}

I could have gone further with this, as the object returned has a PortalURL property, but merely listing the Settings that had Computer Agent was sufficient here. This could be generalized to look for any Client Setting containing a particular setting, and further to look for specific values if needed; a rough sketch of that generalization is below.
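
As a rough sketch (the function name and parameter here are my own, not part of the ConfigMgr module):

# Sketch: list every Client Setting that defines a given setting type, e.g. ComputerAgent
function Find-ClientSettingWith {
    param (
        [Parameter(Mandatory)][string]$Setting
    )
    Get-CMClientSetting | Select-Object -ExpandProperty Name | ForEach-Object {
        # Only Client Settings that actually contain the requested setting type return an object
        if (Get-CMClientSetting -Setting $Setting -Name $_) { $_ }
    }
}

# Usage: Find-ClientSettingWith -Setting ComputerAgent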

Wim Update Automation – ConfigMgr

Our desktop support folks were looking to keep their Wims up to date for their desktop OSD, and our server folks tagged along for the ride. Of course we went ahead and automated the entire process.

We leverage the previous work of OSD gurus to build an MDT-based CTGlobal Image Factory. Those posts are all well covered elsewhere – go forth and seek out the deploymentBunny (aka Mikael Nystrom), Kent Agerlund, Ami Arwidmark, Johan Arwidmark, and so on – so none of that is repeated here.

The next step is to build even more automation around this. The full workflow is as follows…

  1. Azure Automation Scheduled Task triggers runbook monthly to kick it all off
  2. Clean up any existing test VMs in my test cluster… decommission the old tests
  3. Create multiple local running processes on the ImageFactory server
    1. One each to build the local VM
    2. One each to monitor for completion
  4. The completion monitor kicks off new runbooks via webhooks when complete
  5. Refresh OSImage Package properties + Refresh package source.
  6. Check status then deploy VM (ours are in VMWare) specifying test TS
  7. Last package in the test TS is a powerShell script that calls… another runbook
  8. Unit testing – for lack of a better term – of the test VM with its shiny new OS
    1. Verify the .NET version is the latest.
    2. Windows Update verification against the Microsoft Catalog per OS
    3. powerShell / WMF version verification
    4. Checks that SMBv1 / IIS / WCF frameworks are disabled
    5. Insert a check of your own :)
  9. If all unit tests pass – refresh Prod OSImage Package properties and source


To start with, gather all the MDT Task Sequence IDs from ImageFactory, the Operating System Image Package IDs from ConfigMgr, and some sort of unique identifier – I simply picked the OS level being automated. Along with this… it does assume that you have already managed to automate server VM builds in your environment.

## Servers to rebuild
$osList = @("2012","2016","2016-Core")

$oslist | ForEach-Object {
    $os = $_

    if ($os -eq '2012')
    {
        $taskID = 'Server2012'
        $osID = 'ABC0000E'
        $deployOS = '2012R2'
    }
    elseif ($os -eq '2016')
    {
        $taskID = 'Server2016'
        $osID = 'ABC00012'
        $deployOS = '2016'
    }
    elseif ($os -eq '2016-Core')
    {
        $taskID = 'Server2016-Core'
        $osID = 'ABC00010'
        $deployOS = '2016-Core'
    }

    # Flag the old test VM for rebuild (see the note below);
    # $computer is the name of the existing test VM for this OS, set elsewhere in the runbook
    C:\Automation\ServerDecom.ps1 -computer $computer -rebuild $True

    # ... the remaining per-OS steps shown in the snippets below also run inside this loop
}

Quick note on the decommissioning part. My automation can flag a system for rebuild. Instead of deleting the AD computer object, it resets the account password. This is very important later on when the newly deployed test VM has a new IP address and needs to dynamically update AD DNS. If you delete the computer object… say goodbye to your computer’s access to update any existing DNS records.

# Reset the computer account password rather than deleting the object,
# so the rebuilt VM keeps ownership of its existing DNS records
$sam = $computer + '$'
$password = ConvertTo-SecureString -String $sam -AsPlainText -Force
Get-ADComputer $sam -Credential $adCreds | Set-ADAccountPassword -NewPassword:$password -Reset:$true -Credential $adCreds

Moving on – the runbook still needs to kick off the MDT build. This takes a little prep work to have some local powerShell scripts saved on the Image Factory server. The reason for doing it this way is to avoid having a runbook run for an excessive amount of time waiting for the MDT build to complete.

# Kick off the build and monitor scripts as detached processes on the ImageFactory server
Invoke-Command -ComputerName 'ImageFactory Server' -Credential $adCreds -ScriptBlock {
    param($deployOS)
    $process  = "powershell.exe -file c:\scripts\$deployOS`.ps1"
    $process2 = "powershell.exe -file c:\scripts\Monitor-$deployOS`.ps1"
    # 'server' here is the ImageFactory server itself; processes created this way survive the session
    ([WMICLASS]"\\server\ROOT\CIMV2:win32_process").Create($process)
    ([WMICLASS]"\\server\ROOT\CIMV2:win32_process").Create($process2)
} -ArgumentList $deployOS

The reason this was done with process creation is that the remote powerShell session will end all of its processes on exit. However, if you kick off a new process inside the remote session, it keeps running as SYSTEM.

The two scripts that do the local heavy lifting…
Build VM

Remove-Module CTImageFactory -ErrorAction SilentlyContinue
Import-Module "e:\ImgFactory\Scripts\CTImageFactory.psm1" -WarningAction SilentlyContinue
set-location "e:\ImgFactory\Scripts\"
Get-GlobalVariables
Start-Build -BuildType 'Single' -TaskSequenceID 'Server2016'

Monitor VM

do
{
    # Poll every 20 minutes until the build VM powers itself off (end of the MDT build and capture)
    Start-Sleep -Seconds 1200
    $state = (Get-VM | Where-Object Name -eq 'Server2016').State
}
until ($state -eq 'Off')

Invoke-RestMethod -Method Post -Uri 'https://s1events.azure-automation.net/webhooks?token=webhookToken' -Body (ConvertTo-Json -InputObject @{'osID'='ABC00010';'computer'='Test Server Name';'deployOS'='2016-Core'}) -ErrorAction Stop

The last bit calls the next runbook in the gravy train via its webhook. This is where it needs to check in with the site server and make sure the newly captured wim files are updated and distributed, then move on to deploying the wim to a test VM for verification.

$siteCode = Get-AutomationVariable -Name 'ConfigMgr-SiteCode'
$siteServer = Get-AutomationVariable -Name 'ConfigMgr-SiteServer'

################### Update wim file
# $sccmServiceAccount: PSCredential for the ConfigMgr service account
$imageProperties = Get-WmiObject -ComputerName $siteServer -Namespace "root/SMS/site_$sitecode" -Class 'SMS_ImagePackage' -Credential $sccmServiceAccount | Where-Object -Property PackageID -eq $osID
$imageProperties.ReloadImageProperties()
$imageProperties.RefreshPkgSource()

Import-Module "C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1" -Force
$drive = Get-PSDrive -Name $siteCode -ErrorAction SilentlyContinue
If(!$drive){New-PSDrive -Name $siteCode -PSProvider "AdminUI.PS.Provider\CMSite" -Root $siteServer -Credential $sccmServiceAccount -Description "SCCM Site"}
$location = $siteCode + ":"
Push-Location
Set-Location $location
$date = Get-Date -Format d
Set-CMOperatingSystemImage -Id $osID -Description $date -Version "I'm NEW and Shiny"
Pop-Location
Remove-PSDrive $siteCode

# Check on status of distribution
$count = 0
do
{
    $count
    Start-Sleep -Seconds 120
    $status = Get-WmiObject -Credential $sccmServiceAccount -Namespace root\sms\site_$siteCode -Query "SELECT PackageID,PackageType,State,ServerNALPath,LastCopied FROM SMS_PackageStatusDistPointsSummarizer where PackageID = '$($osID)'" -ComputerName $siteServer | Select-Object PackageID,PackageType,State,LastCopied,ServerNALPath
    # State 0 = installed; consider it complete once more than one DP reports success
    if(($status.State | Where-Object {$_ -eq 0}).count -gt 1){$complete = $true}
    $count++
}until($complete -or $count -eq 10)
if(-not($complete)){throw "SCCM Package failed to update in time"}

Phew! All that to update content, distribution, and some descriptions, and to check that it’s all where it needs to be. As long as it all completes, I then call our internal automated server build scripts. We use boot media attached to the VM to automatically call the appropriate task sequence. In addition, we set OSD TS variables in a text file accessible from the OSD environment to quickly set all needed variables (a hypothetical sketch is below) – no waiting for device collection membership and so forth.
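
As a purely hypothetical sketch of that last part (the share path and variable names are placeholders, not our actual ones):

# Hypothetical sketch: stage OSD TS variables in a text file the task sequence can read.
$tsVars = @(
    "TargetComputerName=$computer"
    "TargetTaskSequence=$taskID"
)
$tsVars | Set-Content -Path "\\fileserver\osdvars$\$computer.txt" -Encoding ASCII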

The last step of the task sequence is to call… you guessed it – another runbook via a webhook. This last step is all about verifying the final product. Does it have the latest updates? Did the MDT TS do everything I needed it to do? Let’s find out. Also, this is where having clean AD DNS really comes into play.

$session = New-PSSession -ComputerName $computer -Credential $creds
$dotNetTest = Get-AutomationVariable -Name 'dotnetTest'

# Test for dotNet version
$dotNetValidation = Invoke-Command -Session $session -ScriptBlock {
    param($dotNetTest)
    $dotNetVersions = (Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP' -Recurse | Get-ItemProperty -Name Release -EA 0 | Where-Object { $_.PSChildName -eq 'Full'} | Select-Object Release).Release
    If ($dotNetVersions -ge $dotNetTest) {$dotNetValidation = 'Pass'}
    Else {$dotNetValidation = 'Failed'}
    $dotNetValidation
} -ArgumentList $dotNetTest

Test for Microsoft Update CU

$updateValidation = Invoke-Command -Session $session -ScriptBlock {
    $date = Get-Date -Format yyyy-MM
    $updateTest = "$date Cumulative Update" # Needs to match $date + Cumulative Update *
    $2012updateTest1 = "$date Preview of Monthly Quality Rollup for Windows Server 2012 R2"
    $2012updateTest2 = "$date Security Monthly Quality Rollup for Windows Server 2012 R2"
    $catalogURI = "https://www.catalog.update.microsoft.com/Search.aspx?q="
    $hotfixList = Get-HotFix | Sort-Object -Property HotfixID
    # Take the last three installed hotfixes and look them up in the catalog
    $list = $hotfixList[-3..-1].HotfixID
    $updateValidation = 'Failed'
    $list | ForEach-Object {
        $KB = $_
        $uri = $catalogURI + $KB
        try {
            # Have to use -UseBasicParsing and the raw content, as IE has not run yet on a fresh build
            $site = Invoke-WebRequest -Uri $uri -UseBasicParsing
            $content = $site.RawContent
            If ($content -like "*$updateTest*") {$updateValidation = 'Pass'}
            ElseIf ($content -like "*$2012updateTest1*") {$updateValidation = 'Pass'}
            ElseIf ($content -like "*$2012updateTest2*") {$updateValidation = 'Pass'}
        }
        Catch {Write-Warning "Failed to lookup $KB"}
    }
    # Return the result from the remote session
    $updateValidation
}

Getting into the rest of the validations — enter your own as needed

$powerShellValidation = Invoke-Command -Session $session -ScriptBlock {
    $powerShellVersion = '5.1'
    [string]$major = $PSVersionTable.psversion.major 
    [string]$minor = $PSVersionTable.psversion.minor
    $powerShellTest = $major + ".$minor"

    If ($powerShellTest -eq $powerShellVersion) {$powerShellValidation = 'Pass'}
    Else {$powerShellValidation = 'Failed'}
    $powerShellValidation
    }

$features = Invoke-Command -Session $session -ScriptBlock {Get-WindowsFeature | Where-Object -Property 'InstallState' -eq 'Installed'}

$smbTest = $features | Where-Object -Property Name -eq 'FS-SMB1'
If ($smbTest) {$smbValidation = 'Failed'}
Else {$smbValidation = 'Pass'}

$iisTest = $features | Where-Object -Property Name -like 'web*'
If ($iisTest) {$iisValidation = 'Failed'}
Else {$iisValidation = 'Pass'}

$wcfTest = $features | Where-Object -Property Name -like 'NET-WCF-*'
If ($wcfTest) {$wcfValidation = 'Failed'}
Else {$wcfValidation = 'Pass'}

$testing = @{
    netvalidation        = $dotNetValidation
    updatevalidation     = $updateValidation
    powershellvalidation = $powerShellValidation
    smbvalidation        = $smbValidation
    iisvalidation        = $iisValidation
    wcfvalidation        = $wcfValidation
}

# If any check did not come back 'Pass', branch to the notification step
If (@($testing.Values) -ne 'Pass') {$nextStep = 'Email'}
Else {$nextStep = 'UpdateProd'}

At this point, if anything did not pass, it simply emails our ticketing queue (something like the sketch below) to report what failed, so we can go fix the MDT TS or whatever the case may be. Otherwise, it moves on and updates the production OS package the same way as the test.
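
The notification itself is nothing fancy – roughly along these lines (the SMTP server and addresses are placeholders):

# Hedged sketch of the notification branch; server and addresses are placeholders.
If ($nextStep -eq 'Email') {
    $failed = ($testing.GetEnumerator() | Where-Object { $_.Value -ne 'Pass' }).Key -join ', '
    Send-MailMessage -SmtpServer 'smtp.contoso.com' -From 'automation@contoso.com' -To 'tickets@contoso.com' `
        -Subject "Image validation failed for $computer" -Body "Failed checks: $failed"
}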

Woah… kind of a lot.

So you want to automate against ConfigMgr… do you?

Automation is great. It really is. I’m using Azure Automation Hybrid Runbook Workers for just about everything these days, which is a post on its own, but I wanted to touch on some key interactions with ConfigMgr.

First, the premise. The automation servers are all running Server 2016 Core. So… no ConfigMgr console. No first launch of the console to set up the site connection for powerShell. Actually, no really good way to get any of the default goodness with ConfigMgr at all. For the remainder of the post, the examples will all be dealing with my automation around updating OS image files.

So clearly I just copy over the powerShell module files, with their .dlls, in order to work with the site server remotely. The simple route is to copy the bin path C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin to the same location on your Core server. This will at least get you access to the powerShell modules.

Simple… but this is really messy in an automated world. The important part here is making a New-PSDrive and providing your service account credentials for the task at hand, because many of the ConfigMgr cmdlets do not offer a -Credential parameter to provide access inline.

Import-Module "C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1" -Force
$drive = get-psdrive -name $siteCode -ErrorAction SilentlyContinue
If(!$drive){New-psdrive -Name $siteCode -PSProvider "AdminUI.PS.Provider\CMSite" -root $siteServer -Credential $sccmServiceAccount  -Description "SCCM Site"}
$location = $siteCode+":"
push-location
Set-location $location
$date = get-date -format d
Set-CMOperatingSystemImage -Id $osID -Description $date -version "I'm a NEW and Shiny"
Pop-Location
Remove-Psdrive $siteCode

Alternatively, you could certainly create a powerShell session with credentials and then invoke a script block in that session. Sure, this works, but it requires allowing powerShell remoting for your service account. Just something to consider if you’re comfortable putting all your automation in script blocks; a rough sketch follows.
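
# Sketch only: run the CM cmdlets on the site server itself as the service account.
# Assumes remoting is allowed for the account and the console cmdlets are installed there.
$session = New-PSSession -ComputerName $siteServer -Credential $sccmServiceAccount
Invoke-Command -Session $session -ScriptBlock {
    param($osID, $siteCode)
    Import-Module "C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1"
    if (-not (Get-PSDrive -Name $siteCode -ErrorAction SilentlyContinue)) {
        New-PSDrive -Name $siteCode -PSProvider "AdminUI.PS.Provider\CMSite" -Root $env:COMPUTERNAME | Out-Null
    }
    Set-Location ($siteCode + ':')
    Set-CMOperatingSystemImage -Id $osID -Description (Get-Date -Format d)
} -ArgumentList $osID, $siteCode
Remove-PSSession $session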

Now… into the rabbit hole. I don’t like having to copy files around from a console install. What happens when something changes in the ConfigMgr site version that updates the powerShell cmdlets… now which servers need what copied where? There are many other reasons to keep this stuff off my core infrastructure, but honestly, I just don’t want to have to think about it. Keep it simple. Keep it to powerShell.

So, some other good options. Obviously Cim/Wmi work well, since so much of ConfigMgr is accessible via this channel, and they also work with providing credentials! So letting a service account handle a well-scoped task isn’t that big of a hurdle. The hurdle is that not all Wmi objects have the appropriate methods to write data back to your site server.

# Reload the image properties before refreshing the package on the distribution points
$imageProperties = Get-WmiObject -ComputerName $siteServer  -Namespace "root/SMS/site_$sitecode" -Class 'SMS_ImagePackage' -Credential $sccmServiceAccount |where-object -property PackageID -eq $osID
$imageProperties.ReloadImageProperties()
$imageProperties.RefreshPkgSource()

While this was great for getting the first task done – updating the files in the system – there was no way to update the description or version via the Wmi object. When you pipe the object through Get-Member, the properties exist, but there is no Put() method.

Option five. The bottom of the rabbit hole I went down is to connect directly to the SMS Provider using WMI. This requires creating an SWbemServices object. Say what? No need to explain it here – there’s a Microsoft Docs page for that. Thank you, docsMsft!

I rewrote the VB sample in powerShell a while back, and did a pull request. So side note – if you see something, write something, pull request back!

$siteCode = ''
$siteServer = 'server.domain'

$credentials = Get-Credential
$username = $credentials.UserName

# The connector does not understand a PSCredential. The following command will pull your PSCredential password into a string.
$password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($credentials.Password))

$NameSpace = "root\sms\site_$siteCode"
$SWbemLocator = New-Object -ComObject "WbemScripting.SWbemLocator"
$SWbemLocator.Security_.AuthenticationLevel = 6 # 6 = PacketPrivacy, encrypts the DCOM traffic
$connection = $SWbemLocator.ConnectServer($siteServer,$Namespace,$username,$password)


With a connector in place, you can do something like the following to update the description and version, and Put_() it back:

$wim = $connection.Get("SMS_ImagePackage.PackageID='$osID'")
$wim.Properties_.Item("Version").value = $version
$wim.Properties_.Item("Description").value = $Description
$wim.Put_()

Here is the short of it: I can still provide credentials to run as whatever service principal I need, and then I get the benefit of being able to do… well, anything I want, since it’s a direct connection against the SMS Provider itself.

There you have it folks. A full repertoire of how to get things done when coding in an automated way against ConfigMgr. Pick your poison wisely, or just mix and match as needed.

Strong Cryptography – .NET + powerShell

I spend a lot of time using Invoke cmdlets in powerShell. Over the past year there has been a need to address how the Invoke-RestMethod and Invoke-WebRequest handle SSL/TLS connections as service providers and API endpoints drop older versions of SSL and TLS. The good news is that providers are finally dropping these insecure channels, and the bad news is that Microsoft applications still default to allowing them as a client.

A few things for consideration. Disabling TLS 1.0 and all of SSL is rather simple – a few registry keys and a reboot – and done. See the end of the post for the SCHANNEL reg keys.

It came as a surprise to me, then, when the SecurityProviders keys were set and I started to get connection failures with “the underlying connection has failed” – very common wording for an SSL/TLS handshake failure. What’s going on, powerShell? Why are you failing me?

Turns out… it’s because of .NET. If you pop open a powerShell session you can run

[Net.ServicePointManager]::SecurityProtocol

— and you will likely get the following output: "Ssl, Tls"

Not good, powerShell. I said Ssl is disabled, so why wouldn’t you handshake with something better? Turns out powerShell isn’t as smart as you would think. You have a couple of options.

First — in the worst case, if you still need to use Ssl… please don’t… but if you must, you can set a powerShell session to use TLS 1.2 only by setting

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 

But please don’t do this as a workaround in your scripting. Please get rid of whatever other Ssl requirements are preventing a global system change.

Second — you can tell your OS to set the .NET Framework to use strong cryptography! You would think you wouldn’t need to bother with the Wow6432Node key – who runs powerShell as a 32-bit process? – but even if you check your environment and are running as a 64-bit process, it will still read the 32-bit version of the .NET Framework settings. Set both of the following, reboot, and your .NET calls from powerShell through the ServicePointManager will default to Tls, Tls11, Tls12.

Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord

Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
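
A quick way to confirm the change is to open a fresh powerShell session after the reboot and check again:

# In a new session after the reboot, this should now report: Tls, Tls11, Tls12
[Net.ServicePointManager]::SecurityProtocol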


No more connection issues!


Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"Enabled"=hex(b):00,00,00,00,00,00,00,00
"DisabledByDefault"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]
"Enabled"=hex(b):00,00,00,00,00,00,00,00
"DisabledByDefault"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server]
"DisabledByDefault"=dword:00000001
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001

Microsoft (Azure) APIs

I wanted to share a finer thought on getting your head around all of the wonderful, and sometimes frighteningly complicated, Microsoft APIs. It’s not so bad~

Recently I did a user group talk on this topic and was shocked by the number of hands that were raised when I asked how many developers were in the room. An Azure user group, and almost every single person raised their hand. So I pushed them: how many have written their own API? Again, almost all hands were in the air. By the end, though, one of them said – this is too much info! This is a quick breakdown of what I hoped they took away from the talk. In short… read the manual.

  1. That in the case of APIs, good documentation will make your day. Microsoft does an amazing job of documenting. Let that sink in. Yes. Yep. OK, go check it out: Microsoft Docs.
  2. I wasn’t at all surprised that, with this many developers at an Azure user group, a lot of .NET (Core)/C# developers were in attendance. If a particular language is what you know… go read the manual again on the docs page. The point being that Microsoft does amazing work documenting the REST API reference for many languages, and supports many different client libraries, such as:
  • .NET
  • Java
  • Node.js
  • Python
  • Azure CLI


In the end, it is all the same basic components and concepts no matter the language you develop in: build a URI, call it with an action (verb) against an API endpoint, and include with it a header and some body data for the thing you are trying to do. For the third time (and done): go read the manual for your API. It will tell you everything you need to know.
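
As a small illustration of that pattern in powerShell (a sketch only – it assumes $token already holds a valid OAuth bearer token, and the subscription ID is a placeholder):

# Build the URI, pick the verb, add the headers, and call the endpoint.
$subscriptionId = '00000000-0000-0000-0000-000000000000'
$uri     = "https://management.azure.com/subscriptions/$subscriptionId/resourcegroups?api-version=2021-04-01"
$headers = @{ Authorization = "Bearer $token" }
$response = Invoke-RestMethod -Method Get -Uri $uri -Headers $headers
$response.value | Select-Object name, location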

Commit to Github Using API from Powershell

Continuing my search for doing automation in Powershell with as few dependencies as possible, I turned to committing code to Github via the API. I found various posts that outlined the basic process, but not very good examples of getting it done – certainly none in Powershell. So I extended the Powershell module I had already been working on, UMN-GitHub, to add support. The short version is: just run the function Update-GitHubRepo.

The reference for master is "refs/heads/master"; however, I would recommend committing to a test branch at a minimum and doing a pull request after that. Github has plenty of content on best practices around all that, which is outside the scope of this post. (Use Get-GitHubRepoRef to get a list of refs and choose the one you want.)

Let’s dig in a bit to what the function actually does.

Run $headers = New-GitHubHeader [token or credential switch] first to get the header you will need.

# Get reference to head of ref and record Sha
$reference = Get-GitHubRepoRef -headers $headers -Repo $Repo -Org $Org -server $server -ref $ref
$sha = $reference.object.sha
# get commit for that ref and store Sha
$commit = Get-GitHubCommit -headers $headers -Repo $Repo -Org $Org -server $server -sha $sha
$treeSha = $commit.tree.sha
# Create blob
$blob = New-GitHubBlob -headers $headers -Repo $Repo -Org $Org -server $server -filePath $filePath
# create new Tree
$tree = New-GitHubTree -headers $headers -Repo $Repo -Org $Org -server $server -path $path -blobSha $blob.sha -baseTree $treeSha -mode 100644 -type 'blob'
# create new commit
$newCommit = New-GitHubCommit -headers $headers -Repo $Repo -Org $Org -server $server -message $message -tree $tree.sha -parents @($sha)
# Update head to point at the new commit
Set-GitHubCommit -headers $headers -Repo $Repo -Org $Org -server $server -ref $ref -sha $newCommit.sha

One thing to note: if you open up New-GitHubBlob you’ll notice I converted the file contents to base64. Since the contents need to be sent as text via JSON data, there wasn’t a good way to define a boundary and say “this is the file contents” without it blowing up – it’s not like you can attach a file. So converting it to base64 solved a lot of problems.
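
The conversion itself is simple enough – a rough sketch of the idea (illustrative only, not the module’s exact code):

# Base64-encode the file and post it to the GitHub blob endpoint; the returned sha feeds the new tree.
$bytes = [System.IO.File]::ReadAllBytes($filePath)
$body  = @{ content = [System.Convert]::ToBase64String($bytes); encoding = 'base64' } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "https://api.github.com/repos/$Org/$Repo/git/blobs" -Headers $headers -Body $body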

This first version of New-GitHubTree is not very advanced and only takes in one file. Future versions may take in multiple files.