An Alias and Function Change

As of now, my site has 230 published posts. It’ll be 231 when this one goes live. Those were all written by me since mid-2014. I’m not sure of my per-month average, but I can say for sure that I’ve never missed a month.

Outside of those 230 published posts are 40-some drafts. Those are posts I started and never finished for one reason or another. Sometimes, I realized what I thought worked really didn’t, and therefore I wasn’t able to complete the post. Well, in an effort to clean up the drafts, I’m publishing my first stupid draft turned post. Here goes… It’s learning from my failure — enjoy.

For just about as long as I can remember, I’ve always created an alias for my functions, just before the function is defined. This is, of course, when a function is defined outside of a module, such as when it’s defined in my $PROFILE script. In the below example, I do just that. After the alias is created, I define the function that the alias will invoke.

Set-Alias -Name saw -Value Show-AWord
Function Show-AWord {
    '!! A Word !!'
}

PS > saw
!! A Word !!

There’s another way to do this, as well. You can create the alias after the function’s been defined. You just swap the commands.

Function Show-EWord {
    '** E Word **'
}
Set-Alias -Name sew -Value Show-EWord

PS > sew
** E Word **

And here’s where the post went stupid.

I’ve always been mildly annoyed that I needed to have the code outside of the function, whether it’s before, or after, the function definition. I always wished there was a way to create aliases from inside the function.

Well, there is, and there always has been. I’ve just never given it much thought until about five minutes ago. This might be why I started this post; I didn’t think about it long enough. Here’s nearly the same function as above; however, now we’ll create the alias for the function, within the function. Because the Set-Alias cmdlet has a Scope parameter, we can create a global alias from inside the function.

Function Show-GWord {
    Set-Alias -Name sgw -Value Show-GWord -Scope Global
    '$$ G Word $$'
}

PS > sgw
PS > # Nothing

Here’s about the time I realized my problem. If you create an alias inside the function (using the Global scope [that’s how it “works”]), the alias is not going to exist until after the function has been invoked for the first time. Therefore, the function would have to be run like the below example. I pretty much removed a line outside the function, put it into the function, and then added another line outside the function. Ugh, not what I was after at all.

PS > Show-GWord | Out-Null
PS > sgw
$$ G Word $$

So yeah, this post didn’t go as planned. No wonder it made its home in my drafts. It makes you wonder though: why isn’t there a way to run some code inside a function when the function is being defined? Maybe it’s because functions belong in modules, and modules give you this ability when they’re imported, via their module manifest and, potentially, its ScriptsToProcess entry.
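For what it’s worth, here’s a minimal sketch of that module-based approach. The module name, manifest, and Aliases.ps1 file are all made up, but ScriptsToProcess does run the listed script in the importing session when the module is loaded, which is where the alias gets created.

# MyTools.psd1 (module manifest) -- hypothetical module and file names.
@{
    RootModule        = 'MyTools.psm1'
    ModuleVersion     = '1.0.0'
    FunctionsToExport = @('Show-GWord')
    ScriptsToProcess  = @('Aliases.ps1')
}

# Aliases.ps1 -- runs in the importing session when Import-Module MyTools is invoked.
Set-Alias -Name sgw -Value Show-GWord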

There you have it. A stupid draft I published.

AWS Write-S3Object Folder Creation Problem

If you were wondering what I occasionally thought about over the holiday break, while I was scrubbing my pool tiles—ugh, it was AWS and the Get- and Write- S3Object cmdlets. Why wasn’t I able to get them to do what I wanted!? Let me first explain my problem, and then my solution.

When you create a folder (and yes, I realize it’s not really a folder), in the AWS Management Console, you do so by clicking the “+ Create folder” button. This guy.

Simple stuff. In my case, I choose the AES-256 encryption setting and then give the folder a name. This folder, because I created it in this manner, is returned by the Get-S3Object cmdlet. So, let’s say we walked through this manual procedure and created a folder called S3ConsoleFolder. Here’s what Get-S3Object would return, provided the S3 bucket, which we’re calling tst-bucket, had been empty from the start.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/

Because it could prove helpful, let’s say we also manually created a nested folder inside of our S3ConsoleFolder called S3CFA, as in the A folder of the S3ConsoleFolder. Consider it done. Here are the new results of the Get-S3Object command. As you’ll see, we now return both the top-level folder and the newly created, nested folder.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/

Now, let’s get one step closer to using Write-S3Object. Its purpose, according to its maker, Amazon, is that it “uploads one or more files from the local file system to an S3 bucket.” That, it does. Below I’ve indicated the top-level folder, the nested folders, and the files that we’ll upload. The complete path to this folder is C:\Users\tommymaynard\Desktop\TestFolder.

TestFolder
|__ FolderA
|	|__ TestFile2.txt
|	|__ TestFile3.txt
|__ FolderB
|	|__ TestFile4.txt
|	|__ TestFile5.txt
|	|__ TestFile6.txt
|	|__ FolderC
|		|__ TestFile7.txt
|__ TestFile1.txt

The below Write-S3Object command’s purpose is to get everything in the above file structure uploaded to AWS S3. Additionally, it creates the folders we need: TestFolder, FolderA, FolderB, and FolderC. I’m using a parameter hash table below to decrease the length of my command, so it’s easier to read. It’s in no way a requirement.

$Params = @{
    BucketName = 'tst-bucket'
    Folder = 'C:\Users\tommymaynard\Desktop\TestFolder'
    KeyPrefix = (Split-Path -Path 'C:\Users\tommymaynard\Desktop\TestFolder' -Leaf).TrimEnd('\')
    Recurse = $true
    Region = 'us-gov-west-1'
    ServerSideEncryption = 'AES256'
}

With the parameter hash table created, we’ll splat it on the Write-S3Object cmdlet. When completed, we’ll run the Get version of the cmdlet again, to see what was done.

Write-S3Object @Params
(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/
TestFolder/FolderA/TestFile2.txt
TestFolder/FolderA/TestFile3.txt
TestFolder/FolderB/FolderC/TestFile7.txt
TestFolder/FolderB/TestFile4.txt
TestFolder/FolderB/TestFile5.txt
TestFolder/FolderB/TestFile6.txt
TestFolder/TestFile1.txt

Now, look at the above results and tell me what’s wrong. I’ll wait…

… Can you see it? What don’t we have included in those results, that we did when we created our folders in the AWS Management Console?

Maybe I was asking for too much, but I expected to have my folders returned on their own lines just like we do for S3ConsoleFolder/ and S3ConsoleFolder/S3CFA/. Remember, I’m lying down on my pool deck, scrubbing the pool tiles (it’s Winter, yes, but I live in southern Arizona), and I cannot for the life of me wrap my head around why I’m not seeing those folders on. their. own. lines. I expected to see these lines within my results:

TestFolder/
TestFolder/FolderA/
TestFolder/FolderB/
TestFolder/FolderB/FolderC/

Remember, I can create folders in the AWS Management Console and it works perfectly, but not with Write-S3Object. Well, not with Write-S3Object the way I was using it. I finally had an idea worth trying: I needed to use Write-S3Object to create the folders first, and then upload the files into the folders. It’s obnoxious. While that required more calls to Write-S3Object, I was okay with it, if it could get me the results I wanted. And ultimately, get my users the results I wanted them to have.

So let’s dump my TestFolder from S3 and start over. We’re here again.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/

We’ll start by creating our parameter hash table. After that, we’ll begin to use the $Params variable and its properties (its keys) to supply parameter values to parameter names. I don’t actually splat the entire hash table in this code section. Although the next three code sections go together, I’ve broken them up, so I can better explain them. This first section creates a $Path variable, the aforementioned parameter hash table (partially based on the $Path variable), and an If statement. The If statement works this way: if my S3 bucket doesn’t already include a folder called TestFolder/, then create it.

$Path = 'C:\Users\tommymaynard\Desktop\TestFolder'
$Params = @{
    BucketName = 'tst-bucket'
    Folder = (Split-Path -Path $Path -Leaf).TrimEnd('\')
    KeyPrefix = (Split-Path -Path $Path -Leaf).TrimEnd('\')
    Recurse = $true
    Region = 'us-gov-west-1'
    ServerSideEncryption = 'AES256'
}

# Create top-level folder (if necessary).
If ((Get-S3Object -BucketName $Params.BucketName -Region $Params.Region).Key -notcontains "$($Params.Folder)/") {
    Write-S3Object -BucketName $Params.BucketName -Region $Params.Region -Key "$($Params.Folder)/" -Content $Params.Folder -ServerSideEncryption AES256
}

Now, Get-S3Object returns my newly created, top-level folder. It was working so far.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/
TestFolder/

This second section gets all of the directory names from my path. If there are duplicates, and there were, they’re removed by Select-Object’s Unique parameter. Once I know these, I can start creating my nested folders after cleaning them up a little: splitting the path, replacing backslashes with forward slashes, and removing any forward slashes from the beginning of the path. With each of those, we’ll make sure the cleaned-up path doesn’t include two forward slashes (that would indicate it’s the top-level folder again [as TestFolder//]), and that it doesn’t already exist.

# Create nested level folder(s) (if necessary).
$NestedPaths = (Get-ChildItem -Path $Path -Recurse).DirectoryName | Select-Object -Unique
Foreach ($NestedPath in $NestedPaths) {

    $CleanNestedPath = "$(($NestedPath -split "$(Split-Path -Path $Path -Leaf)")[-1].Replace('\','/').TrimStart('/'))"

    If (("$($Params.Folder)/$CleanNestedPath/" -notmatch '//') -and ((Get-S3Object -BucketName $Params.BucketName -Region $Params.Region).Key -notcontains "$($Params.Folder)/$CleanNestedPath/")) {
        Write-S3Object -BucketName $Params.BucketName -Region $Params.Region -Key "$($Params.Folder)/$CleanNestedPath/" -Content $CleanNestedPath -ServerSideEncryption AES256
    }
}

And, now Get-S3Object returns my nested folders, too. Still working.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/
TestFolder/
TestFolder/FolderA/
TestFolder/FolderB/
TestFolder/FolderB/FolderC/

This last section only serves to upload the files from the EC2 instance to the S3 bucket and into the folders we’ve created. Unlike the code in the last two sections, this doesn’t check whether the files already exist. It’ll happily write right over them without warning. I didn’t need this protection, so I didn’t include it.

Write-S3Object -BucketName $Params.BucketName -Region $Params.Region -Folder $Path -KeyPrefix "$($Params.Folder)/" -ServerSideEncryption AES256 -Recurse

Now that my folders are created and the files are uploaded, I get the results I expect. I can see all the folders on their own lines, as well as all of the files.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/
TestFolder/
TestFolder/FolderA/
TestFolder/FolderA/TestFile2.txt
TestFolder/FolderA/TestFile3.txt
TestFolder/FolderB/
TestFolder/FolderB/FolderC/
TestFolder/FolderB/FolderC/TestFile7.txt
TestFolder/FolderB/TestFile4.txt
TestFolder/FolderB/TestFile5.txt
TestFolder/FolderB/TestFile6.txt
TestFolder/TestFile1.txt

I do hope I didn’t overlook an easier way to do this, but as history has proved, it’s quite possible. Here’s to hoping this can help someone else. I felt pretty lost and confused until I figured it out. It seems to me that AWS needs to iron this one out for us. No matter how it’s used, Write-S3Object should create “folders” in such a way that they are consistently returned by Get-S3Object. That’s whether they’re created before files are uploaded (my fix), or simply created as files are uploaded.

And, cue the person to tell me the easier way.

Comfortably Save a Pester Test

I’ve spent some time with Pester this week. While exploring the OutputFile parameter, it quickly became clear that the best I seemed to be able to output was some form of XML, and honestly, that wasn’t good enough for me in my moment of discovery. While I intend to make myself a Pester expert in the coming year (2018), there are some things I don’t know 100% yet, and so I understand it’s possible that I sound like an idiot, as I may not know about x in regard to Pester.

While working with Invoke-Pester, I came across the -PassThru parameter. Its purpose in life, in relation to the Invoke-Pester command, is to create a PSCustomObject. Now that’s something I can work with. The idea here is to invoke Pester against an AWS instance, export my object (to an XML standard I can deal with [thank you, Export-Clixml]), and write it to S3. Then, I can download the file to a computer that’s not my newly configured AWS instance and check which tests passed and failed. This, without the need to RDP (Remote Desktop) to the instance and visually and manually check what did and didn’t work from there. We’re getting closer and closer to RDP being a security incident.

My example is not going to include an actual Pester run, so instead we’ll jump directly to the Invoke-Pester command and what we’ll do after that command has executed. This first example is a fairly standard way of invoking Pester (with parameters) and creating an output file. Again, this isn’t the output I’m after.

PS > Invoke-Pester -Script @{Path = 'C:\WkDir\HostAcceptance.Tests.ps1'; Parameters = @{Project = 'trailking'; Environment = 'tst'}} -OutputFile 'C:\WkDir\PesterOutputXml.xml'

Instead, we’re going to include some other parameters: PassThru, Show, and OutVariable. PassThru will provide us an object that contains all of the Pester results, Show with the None value will hide the Pester test results, and OutVariable will get that object (that, again, contains all of the Pester results) into the PesterResults variable.

PS > Invoke-Pester -Script @{Path = 'C:\WkDir\HostAcceptance.Tests.ps1'; Parameters = @{Project = 'trailking'; Environment = 'tst'}} -PassThru -Show None -OutVariable PesterResults

I was mainly okay with the above command; however, it was still writing my object to the host program, and I wanted nothing to be displayed at all. It’s too bad that option isn’t in there by default, but I’m okay with improvisation. Again, I know PowerShell better than Pester, so there’s the possibility I just don’t know enough at the immediate moment, and that there’s a better way to have done this. Anyway, the below example removes all the output to the host, but still stuffs my results into the PesterResults variable.

PS > [System.Void](Invoke-Pester -Script @{Path = 'C:\WkDir\HostAcceptance.Tests.ps1'; Parameters = @{Project = 'trailking'; Environment = 'tst'}} -PassThru -Show None -OutVariable PesterResults)

Okay, now what? Everything I need and more is now in the $PesterResults variable. Next, we’ll export it into an XML format that Import-Clixml can deal with.

PS > $PesterResults | Export-Clixml -Path 'C:\WkDir\PesterExport.xml'

Now that it’s in a usable format, I’m going to read it back in. You can go ahead and pretend that I’ve moved my exported XML file from the computer on which it was created, and now I’m about to read it in on a different computer. This visualization is as though it was uploaded to AWS S3, and downloaded from S3 on a different device. We’ll say it’s on my Desktop.
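If you’re curious what that S3 round trip might look like with the AWS Tools for PowerShell, here’s a minimal sketch; the bucket name and key are made up, and the region simply mirrors the earlier posts.

# On the AWS instance: upload the exported Pester results.
Write-S3Object -BucketName 'tst-bucket' -Key 'pester/PesterExport.xml' -File 'C:\WkDir\PesterExport.xml' -Region 'us-gov-west-1'

# On another computer: download the results for review.
Read-S3Object -BucketName 'tst-bucket' -Key 'pester/PesterExport.xml' -File 'C:\Users\tommymaynard\Desktop\PesterStuff\PesterExport.xml' -Region 'us-gov-west-1'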

$PesterResults = Import-Clixml -Path 'C:\Users\tommymaynard\Desktop\PesterStuff\PesterExport.xml'
Foreach ($Result in $PesterResults.TestResult) {
    [System.Array]$Object += [PSCustomObject]@{
        Describe = $Result.Describe.TrimEnd(':')
        Context = $Result.Context.TrimEnd(':')
        It = $Result.Name.TrimEnd(':')
        Should = $Result.Result
        Time = $Result.Time
    }
} 

$Object
# $Object | Format-Table -Autosize
# $Object | Export-Csv -Path 'C:\Users\tommymaynard\Desktop\PesterStuff\PesterExport.csv' -NoTypeInformation

In the above example, we do a few things. We import the file we created out in AWS into a variable on my local computer, and then begin to iterate through its contents. For each entry, we add another object to the $Object variable. Each object will contain a Describe, Context, It, Should, and Time property, all of which are obtained from the TestResult property of the $PesterResults variable.

The last three lines are a few ways to handle the output (2 and 3 are commented out): (1) display it in the host, (2) display it in the host in an easier-to-read format, and (3) write the objects to CSV. Neat, right? As I continue to better learn Pester… I might just be back here. I can’t be 100% sure that this is the best way to save off and deal with the results of the Pester tests, but we’ll see!

Parse Net.exe Accounts (for Pester)

In a recent Pester test, I needed to verify that three settings in net.exe accounts were properly set. These included Lockout threshold, Lockout duration (minutes), and Lockout observation window (minutes). Well, now that I have my answer, I thought I would document it here. Before I show you how I handled this task, based on varying versions of PowerShell, I’ll show you the default output that I needed to parse.

PS > net.exe accounts
Force user logoff how long after time expires?:       Never
Minimum password age (days):                          0
Maximum password age (days):                          42
Minimum password length:                              0
Length of password history maintained:                None
Lockout threshold:                                    3
Lockout duration (minutes):                           30
Lockout observation window (minutes):                 30
Computer role:                                        SERVER
The command completed successfully.

I needed the above output to be parsed, and when that was done, I only needed the values of the three previously mentioned Lockout settings to be displayed. The below code indicates that if PowerShell is a version greater than 4.0, the ConvertFrom-String cmdlet can be used. It’s not necessary, but it was good to practice using a cmdlet I hardly ever use. If the PowerShell version isn’t greater than 4.0, we’ll use a temporary variable and do the parsing ourselves. In the end, and regardless of version, we’ll get our results. I’m using [PSCustomObject], but I am confident this test will never run with a version of PowerShell less than 3.0. This is happening in AWS with a Server 2012 R2 AMI, and as we know, 2012 R2 includes PowerShell 4.0 by default.

If ($PSVersionTable.PSVersion.Major -gt 4) {
    $AcctSettings = net.exe accounts | ForEach-Object {
        ConvertFrom-String -InputObject $_ -Delimiter ': +' -PropertyNames Setting,Value
    }
} Else {
    $AcctSettings = net.exe accounts | ForEach-Object {
        $TempVar = $_ -split ': +'
        [PSCustomObject]@{Setting = $TempVar[0]; Value = $TempVar[1]}
    }
}
($AcctSettings | Where-Object {$_.Setting -eq 'Lockout threshold'}).Value
($AcctSettings | Where-Object {$_.Setting -eq 'Lockout duration (minutes)'}).Value
($AcctSettings | Where-Object {$_.Setting -eq 'Lockout observation window (minutes)'}).Value
3
30
30

This task was being done for Pester, so while we’re here, let me show it to you inside the Pester It Block.

# Account lockout policies.
It 'Checking the account lockout threshold, duration, observation window settings:' {
    If ($PSVersionTable.PSVersion.Major -gt 4) {
        $AcctSettings = net.exe accounts | ForEach-Object {
            ConvertFrom-String -InputObject $_ -Delimiter ': +' -PropertyNames Setting,Value
        }
    } Else {
        $AcctSettings = net.exe accounts | ForEach-Object {
            $TempVar = $_ -split ': +'
            [PSCustomObject]@{Setting = $TempVar[0]; Value = $TempVar[1]}
        }
    }
    ($AcctSettings | Where-Object {$_.Setting -eq 'Lockout threshold'}).Value | Should -Be 3
    ($AcctSettings | Where-Object {$_.Setting -eq 'Lockout duration (minutes)'}).Value | Should -Be 30
    ($AcctSettings | Where-Object {$_.Setting -eq 'Lockout observation window (minutes)'}).Value | Should -Be 30
} # End It.

That’s it! Now you can parse net.exe accounts, too!!

Parse Computer Name for Project Name

On some days… I just want a full-on project redo. It’s amazing how many decisions you’d make differently in those initial project meetings, once you’ve begun delivering results. Why, oh why, did we allow for hyphens in project names!?

Here’s my problem and how I fixed it. Let’s say we have four projects and their names are those listed below.

TRAILKING
ARF-SOIL
INTERALX
SECTRAIN

Now, let’s consider that we use these as our host names, or computer names; however, we append some information onto these four strings. For our terminal hosts, we’ll add -TH-01 and for our compute hosts, we add -CH-01. Therefore, we’d have two computer names for each project. For the TRAILKING and ARF-SOIL projects, we’d have the following four computers.

TRAILKING-TH-01, TRAILKING-CH-01, ARF-SOIL-TH-01, and ARF-SOIL-CH-01

Now, let’s consider we need to parse these computer names later on to help determine the project name. Can you see the problem? Because I didn’t initially. The little extra coding I had to do is why we’re here today. You know, someone might need it one day, too.

If I split the full computer name at the first hyphen, and assume index 0 is the project name, then the ARF-SOIL project is only going to be the ARF project. That’s not going to work. Take a look at my one-off solution. I hate these, but sometimes, it’s just too late to fix a project problem. Hindsight, man.

$String1 = 'TRAILKING-TH-01'
$String2 = 'TRAILKING-CH-01'
$String3 = 'ARF-SOIL-TH-01'
$String4 = 'ARF-SOIL-CH-01'
$String5 = '-ARF-SOIL-CH-01'

$String = Get-Random -InputObject $String1,$String2,$String3,$String4,$String5
$TempArray = $String.Split('-')

If ($TempArray.Count -eq 3) {
    $Project = $TempArray[0]

} ElseIf ($TempArray.Count -eq 4) {
    $Project = "$($TempArray[0])-$($TempArray[1])"

} Else {
    Write-Warning -Message "Unable to properly parse computer name: $String."
    Write-Verbose -Message "$BlockLocation Unable to properly parse computer name: $String."
}

$Project

As we repeatedly run this code in the ISE, or Visual Studio Code, it’ll properly parse our computer names. If the string is split at its hyphens, and we’re left with three parts (TRAILKING, CH, and 01), then we know the first part is the project name. If the string is split at its hyphens, and we’re left with four parts (ARF, SOIL, TH, and 01), then we know the first two parts, combined with a hyphen, are the project name.

That was it. Happy Thanksgiving!

Determine if AWS EC2 Instance is in Test or Prod Account

It recently became apparent that I need a way to determine if I’m on an AWS TST (test) EC2 instance, or a PRD (production) EC2 instance. The reason this is necessary is so that I can include a function to upload a file, or a folder, to an S3 bucket and ensure the portion of the bucket name that indicates the environment is included and is correct. Therefore, I needed a function to be able to determine where it was running: in TST or PRD. Had I known I would need this information earlier, I would’ve had the CloudFormation template write this information to the Windows Registry or a flat file, so I could pick it up when needed. Because I didn’t consider an option such as this, I’m now wracking my brain to determine a way to gather this information when I didn’t leave it anywhere for myself.

The computer names are the same in both environments (in both AWS accounts), so any comparison there doesn’t work. The TST servers and the PRD servers have the same name. After some thought, I came up with my three options:

1. Return the ARN from the metadata, and use that to determine whether I am on a TST or PRD EC2 instance, by extracting the AWS account number from the ARN and comparing it to two known values. This would allow me to determine which account I’m in—TST or PRD.

2. Look at the folders in C:\support\Logs. My functions, which are invoked inside the UserData section of a CloudFormation template, include logging that creates folders in this location named “TST” on a test EC2 instance and named “PRD” on a production EC2 instance (a quick sketch of this check follows the list).

3. Read in the UserScript.ps1 file (the UserData section in the CloudFormation template), and compare the number of ‘tst’ and ‘prd’ strings in the file’s contents. Based on my UserData section, if there are more ‘tst’ strings, I’m on a TST EC2 instance, and if there are more ‘prd’ strings, I’m on a PRD EC2 instance.
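For reference, option two would’ve been the simplest of the three. Here’s a minimal sketch of that check; it assumes the logging folders are named exactly TST and PRD, as described above.

# Option two (hypothetical sketch): infer the environment from the logging folder name.
If (Test-Path -Path 'C:\support\Logs\TST') {
    $Env = 'tst'
} ElseIf (Test-Path -Path 'C:\support\Logs\PRD') {
    $Env = 'prd'
} Else {
    $Env = 'unknown'
}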

Seriously Tommy, leave yourself some information on the system somewhere already. You never know when you might need it.

I decided I would use two of the three above options, providing myself a fallback if it were ever necessary. Who knows, I may not retire from my current place of employment, and I’d like my code to continue to work as long as possible, even if I’m not around to fix it. I was worried about hard-coding those AWS account numbers, but less so once I added a check against my UserScript.ps1 file (again, the UserData section of a CloudFormation template) for the count of ‘tst’ vs. ‘prd’ strings. Maybe it’ll never be used, but if needed, it’ll be there for me.

Here’s my first check in order to determine if my code is running on a TST or PRD EC2 instance. Feel free to look past my Write-Verbose statements. I used these for logging. So you have a touch of information, I’m creating the name of an S3 bucket, such as <projectname>-<environment>. Right now, we’re after the <environment> section, which I’m out to store in $Env. The <projectname> portion comes from parsing the EC2 instance’s assigned hostname.

#region Determine S3 Bucket using ARN and Account Number.
Write-Verbose -Message "$BlockLocation Determing the S3 Bucket Name using the ARN and account number."
$WebRequestResults = (Invoke-WebRequest -Uri 'http://169.254.169.254/latest/meta-data/iam/info' -Verbose:$false).Content
$InstanceProfileArn = (ConvertFrom-Json -InputObject $WebRequestResults).InstanceProfileArn
$AccountNumber = ($InstanceProfileArn.Split(':'))[4]
Write-Verbose -Message "$BlockLocation ARN: $InstanceProfileArn."
Write-Verbose -Message "$BlockLocation Account Number: $AccountNumber."`

If ($AccountNumber -eq '615613892375') {
    $Env = 'tst'
} ElseIf ($AccountNumber -eq '368125857028') {
    $Env = 'prd'
} Else {
    $Env = 'unknown'
    Write-Verbose -Message "$BlockLocation Unable to determine S3 Bucket Name using the ARN and account number."
} # End If.
#endregion

The following region’s code is wrapped in an If statement that will only fire if the $Env variable is equal to the string “unknown”. Notice that in the above code, $Env will be set to this value if neither of the AWS account numbers matches my hard-coded values. No, those aren’t real AWS account numbers—well, not mine at least, and yes, they could’ve come in via function parameters.

#region If necessary, determine using 'tst' vs. 'prd' in UserScript.ps1.
If ($Env -eq 'unknown') {
    Write-Verbose -Message "$BlockLocation Determing S3 Bucket Name by comparing specific strings in the UserScript.ps1 file."
    $FileContent = Get-Content -Path "$env:ProgramFiles\Amazon\EC2ConfigService\Scripts\UserScript.ps1"
    $MatchesTst = Select-String -InputObject $FileContent -Pattern 'tst' -AllMatches
    $MatchesPrd = Select-String -InputObject $FileContent -Pattern 'prd' -AllMatches
    Write-Verbose -Message "$BlockLocation Comparing ."

    If ($MatchesTst.Matches.Count -gt $MatchesPrd.Matches.Count) {
        $Env = 'tst'
    } ElseIf ($MatchesTst.Matches.Count -lt $MatchesPrd.Matches.Count)  {
        $Env = 'prd'
    } Else {
        Write-Warning -Message "Unable to determine account number by comparing specific strings in the UserScript.ps1 file."
        Write-Verbose -Message "$BlockLocation Unable to determine account number by comparing specific strings in the UserScript.ps1 file."
    }
} # End If.
#endregion

If the above code fires, it will read in the contents of my UserScript.ps1 file. Again, this script file contains exactly what’s in the UserData section of my instance’s CloudFormation template. Once we have the file’s contents in a variable, we’ll scan it two times. On the first check, we’ll record the matches for the string ‘tst’. On the second check, we’ll record the matches for the string ‘prd’. The way I’ve written my UserData section, if there are more ‘tst’ strings, I’m on a TST EC2 instance, and if there are more ‘prd’ strings, I’m on a PRD EC2 instance. There’s a nested If statement that does this comparison and tells me which it is by assigning the proper value to that $Env variable.

So, after all that, I think I’ll just try to predetermine what information might be needed in the future and drop that into a flat file or put it into the registry… just in case it’s ever needed. This task was obnoxious but worthy of sharing.

Potentially Avoid Logging Plain Text Passwords

A few weeks ago I spoke at the Arizona PowerShell Saturday event where I introduced the newest version of my Advanced Function template — the 2.0 version. You can check it out here: http://tommymaynard.com/an-advanced-function-template-2-0-version-2017. It’s headed toward 200 downloads — not bad. Well, there’s going to need to be a newer version sooner rather than later, and here’s why.

The Advanced Function template writes out a few informational lines at the top of its log files. If you didn’t know, a big part of my template is the logging it performs. These lines indicate what, when, who, and from where a function was invoked. In addition, it also logs every parameter name and every parameter value, or values, that are included. This was all very helpful until I pulled a password out of AWS’ EC2 Parameter Store — it’s secure — and fed it to the function with the logging enabled. It was a parameter value to a parameter named password, and it ended up in the clear, in a text file.

I know. I know. The function should only accept secure strings, and it will, but until then, I took a few minutes to write something I’ll be adding to the 2.1 version of my Advanced Function template. You see, I can’t always assume other people will use secure strings. My update will recognize if a parameter name has the word password, and if it does, it will replace its value in the logging with asterisks.

More or less, let’s start with the code that failed me. For each key-value pair in the $PSBoundParameters hash table, we write the key and its corresponding value to the screen.
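The code originally appeared here as a screenshot. Here’s a minimal reconstruction of the idea, using a hypothetical function and parameters rather than the actual template code.

Function Show-Params {
    [CmdletBinding()]
    Param (
        [string]$Name,
        [string]$Password
    )
    # Write every bound parameter name and value -- this is what leaked the password.
    Foreach ($Key in $PSBoundParameters.Keys) {
        "$($Key): $($PSBoundParameters[$Key])"
    }
}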

Alright, now that we have our function in memory, let’s invoke the function and check out the results.
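Again, a stand-in for the screenshot that was here, using the hypothetical function above:

PS > Show-Params -Name 'tommymaynard' -Password 'P@ssw0rd!'
Name: tommymaynard
Password: P@ssw0rd!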

In the above results, my password is included on the screen. That means it could end up inside of an on-disk file. We can’t have that.

Now, here’s the updated code concept I’ll likely add to my Advanced Function template. In this example, if the key includes the word password, then we’re going to replace its value with asterisks. The results of this function are further below.
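Here’s a sketch of that concept, again against the hypothetical function, followed by the results:

Function Show-Params {
    [CmdletBinding()]
    Param (
        [string]$Name,
        [string]$Password
    )
    Foreach ($Key in $PSBoundParameters.Keys) {
        # If the parameter name includes the word password, mask its value with asterisks.
        If ($Key -match 'password') {
            "$($Key): ********"
        } Else {
            "$($Key): $($PSBoundParameters[$Key])"
        }
    }
}

PS > Show-Params -Name 'tommymaynard' -Password 'P@ssw0rd!'
Name: tommymaynard
Password: ********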

In these results, our password parameter value isn’t written in plain text. With that, I guess I’ll need to add this protection. By the way, if you didn’t notice, Password was underlined in green in the first and third screenshots. This is neat; it’s actually PSScriptAnalyzer recognizing that I should use a SecureString based on the fact that the parameter is named Password. I can’t predict what everyone will do, so what’s a couple more lines to potentially protect someone from storing something secure in an insecure manner.

An Advanced Function Template (2.0 Version)

Welcome. If you’re here for the download, it’s toward the bottom of this post.

Today’s post goes hand in hand with a session I gave at the Arizona PowerShell Saturday event on Saturday, October 14, 2017. I didn’t do it previously, but this year especially, it made sense to have a post at tommymaynard.com as a part of my session at the event. I wanted a place to offer my advanced function template for download, and so this is it. If you couldn’t attend the event and be a part of the session yourself, then this may be the next best alternative. Well, for my session anyway. This event included sessions from Jason Yoder, Will Anderson, and Jason Helmick. While we’re at it — naming names — many thanks to Thom Schumacher for his role in organizing this event.

Toward the end of 2016, I spent some nights and weekends, and moments in the office too, writing a PowerShell advanced function template. Its main purpose was to include built-in function logging. You see, I wanted logging, but I didn’t want an external logging function to do it, and so I decided that every one of my functions would use the same template, and therefore, I could offer consistent logging capabilities across all my functions. These include my own functions, and even those written and put in place for my coworkers. At last check, I’ve snuck 40 plus tools into production. These include PowerShell functions for Active Directory, Group Policy, Exchange, SharePoint, Office 365, Amazon Web Services, VMware, and general operating system and management needs. There’s always something to automate, and now, when they do get automated, each includes the same base functionality.

I liked it, I use it, and I even made my advanced function template available for download on its original post. After some use, I came to realize that it could’ve been better. If you’ve been at this scripting and automation game for a while, then you understand that automation, even when it’s done, is never really done. There’s always room for improvement, even if there isn’t always time to execute that mental list of changes, fixes, and increased functionality you want to add to already written automation.

The first thing I needed, which I didn’t even know I needed at first, was a function, and I’m not talking about the template code. I’m talking about a way to demonstrate both advanced function template versions (1.0 and 2.0), using the same non-template code. At first, I thought I’d just walk through the code in my 2.0 version of my advanced function template, but really, it made sense to use an easy-to-understand, previously written function as an example, running in both the 1.0 and 2.0 versions of my advanced function template. At nearly the same time I was prepping for PowerShell Saturday, I had written a function that created random passwords — I know, I know… there’s a bunch of these already. I took that function’s code and wrapped it in my 2.0 version, as it had already been written with my 1.0 version, for use in my session at the PowerShell Saturday event.

All the files I used are included in the below, virus-free zip file. This includes the ArizonaPowerShellSaturday.ps1 file that I used to run all the various commands, the New-RandomPassword1.0.ps1 file (uses the 1.0 version of the advanced function template), the New-RandomPassword2.0.ps1 file (uses the 2.0 version of the advanced function template), and the blank AdvancedFunctionTemplate2.0.ps1 — this is the one you’re likely after. If you attempt to use the first file mentioned, ArizonaPowerShellSaturday.ps1, then you’ll need to modify the first region, where the variables are assigned, so that they point to the other three files, wherever you decided to save them. Also, there are a couple of references to an alias I use, called code. I don’t believe this is a built-in alias, so the line won’t work as expected on other people’s systems. Know that the idea behind those lines is to open the referenced file inside of Visual Studio Code.

ArizonaPowerShellSaturday2017AllFiles (download)

Update: I was asked at the PowerShell Saturday event what kind of license I had. Ugh, none. But, for the sake of those that need it, let’s distribute this under the MIT License further below.

Update: The built-in logging ability in my Advanced Function template writes all parameter names and associated values to the screen, to a file, or to both the screen and a file. This means that if you’re passing secure data as a parameter value, it needs to be done in a secure manner, or it’s going to appear in the logs. I intend to put in a stopgap for this, but it may not be perfect. You can read more here: http://tommymaynard.com/potentially-avoid-logging-plain-text-passwords-2017. Watch for a link on this post, and that one, to the newest post that’ll include the 2.1 version!

Update: And, here’s the link!

An Advanced Function Template (Version 2.1 -and -gt)

Copyright 2017

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Add the ISE’s Ctrl + M to Visual Studio Code

As suspected, by me at least, the more I use Microsoft Visual Studio Code, the more I’m going to want to modify it. Remember, I just came into the light with the recent addition of Region support in version 1.17. This desire to modify is never more evident than in an edit I made today to the keybindings.json file. This file allows one to override the default keyboard shortcuts in order to implement “advanced customizations.” It’s pretty awesome, and well appreciated.

Today I added the section for the Ctrl + M keyboard combination, such as can be seen in the last, or third, section of the below JSON. What’s this do, right? If we think back to the ISE (Microsoft’s Integrated Scripting Environment), you may remember that Ctrl + M collapsed all collapsible sections in the current script, or function. With this change in Visual Studio Code, I can now continue to use Ctrl + M to quickly collapse all the sections in my active function, or script. That’s until I realize what the default action of Ctrl + M — Toggle Tab Key Moves Focus — actually does, and I find it necessary. I do want to mention that I could’ve just gone with the new keyboard combination of Ctrl + K Ctrl + 0. Ugh, no thanks for now.

[
    { "key": "ctrl+`",      "command": "workbench.action.terminal.focus",
                               "when": "!terminalFocus"},
    { "key": "ctrl+`",      "command": "workbench.action.focusActiveEditorGroup",
                               "when": "terminalFocus"},
    { "key": "ctrl+m",      "command": "editor.foldAll",
                               "when": "editorTextFocus"} 
]

While we’re here, I might as well mention (a.k.a. help myself remember) what the first two sections in my keybindings.json file do, as well. These allow me to use Ctrl + ` to switch focus between the editor on top, where I write my code, and the terminal below, where I can run PowerShell commands interactively. While that’s all for this post, I won’t be surprised if I’m back here updating it with new additions I make to my keybindings.json file.

As a newbie to Visual Studio Code for PowerShell development, I already don’t like even seeing the ISE. It’s pretty amazing what Region support in Visual Studio Code did to me.

Visual Studio Code Regions

It’s happened.

Microsoft has figured out how to get regions to work in Visual Studio Code. I thought it, and I may have even said it too, but it’s been my holdout for not using Microsoft’s code editor for PowerShell. While I’ve been using Visual Studio Code for my AWS YAML creation without regions, I hadn’t been ready to give up the ISE (Integrated Scripting Environment). Well, as of today, those days are over.

So what’s a region, right? It’s an easy way to collapse a section of code that isn’t collapsible by default. I greatly suspect I first learned about them from Ed Wilson — the original scripting guy. Here are a couple of examples from my old friend, the ISE. In the first example you can see the region sections aren’t collapsed and therefore display the commands. In the second image, you can’t see the commands at all. This becomes quite helpful when the region is loaded full of code and commands that simply don’t always need to be seen.
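Since the ISE screenshots don’t translate to text, here’s a small, made-up example of a region in a script. Expanded, you see the commands; collapsed, all that’s left is the #region line.

#region Gather operating system information.
$OS = Get-CimInstance -ClassName Win32_OperatingSystem
$OS.Caption
$OS.Version
#endregion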

Here’s the same examples in Visual Studio Code.

What a glorious day! After being all set to use the ISE for the upcoming Arizona PowerShell Saturday event (2017), due to a lack of region support in VS Code, I’m glad to report that I’m going to go ahead and use VS Code for my session. I didn’t see that one coming!

Finally, here’s an example of nested regions. I often do this as well, and these seem to work as expected.
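And a small, made-up example of that nesting:

#region Collect system information.
#region Operating system.
$OS = Get-CimInstance -ClassName Win32_OperatingSystem
#endregion
#region BIOS.
$BIOS = Get-CimInstance -ClassName Win32_BIOS
#endregion
#endregion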