Tag Archives: AWS

Change Prompt on Module Import

To me, my PowerShell prompt is quite important. I’ve written about it nine times already. While I’m not going to write about it again in depth, I am going to focus on an updated prompt I’ve created for a recent project. I couldn’t help but carry a few things from my prompt into this project, and that’s why it’s been mentioned.

I recently received a screen capture from someone running into a problem using one of the tools I’ve written. Sure, it needs some error checking, I won’t deny that, but it was a very obscure and unforeseen problem. You know, how we often find out about error conditions.

This tool, or function, can be run on an Amazon Web Services EC2 instance within a project, to determine the status of its partner EC2 instance. When it works, it returns the status of the other instance, to include things like running, stopping, stopped, etc. There are also a couple of other companion functions that can start and stop the partner instance. The second of the two machines has very high specifications, and so we ask that our users shut down those secondary instances when they’re not running experiments. They’re pricey.
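
To give a sense of it, here’s a minimal sketch of what that kind of status check might look like. This isn’t the actual tool; the function name and instance ID are placeholders, and it assumes the AWSPowerShell module’s Get-EC2Instance cmdlet is available.

# A sketch only; not the actual tool. The function name and instance ID are placeholders.
Function Get-PartnerInstanceStatus {
    Param (
        [string]$PartnerInstanceId = 'i-0123456789abcdef0'
    )
    # Returns values such as running, stopping, stopped, etc.
    (Get-EC2Instance -InstanceId $PartnerInstanceId).Instances.State.Name
}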

The problem is that when I look at this error, the prompt doesn’t tell me enough. I can only tell it’s PowerShell — the PS in the prompt — and the current path — some of which has been hidden in this first image. I want more information without having to ask the user, and so I’ve added that in.  Here’s the default prompt up close.

The new prompt includes the username (to the left of the @), the computer name (to the right of the @), the project name (while it’s not in this example, it’s normally a part of the computer name), the path, and whether the user is an admin (#) or not ($). Now, when I receive a PowerShell screen capture, I already have a few of my first questions answered.

All of the functions, such as the one that was run that generated this error, are a part of the same PowerShell module. There are somewhere near 20 of them so far, and I keep finding reasons to add new ones. If you don’t know, creating a PowerShell script module that includes a module manifest file (a .psd1) allows one to include scripts that execute just before the module is imported. This happens whether the module is imported manually using Import-Module, or implicitly, by invoking a function from within the module when the module hasn’t yet been imported.

Let’s take a look at my SetPrompt.ps1 script file that executes when my PowerShell module is imported.

Function prompt {
    # Determine Admin; set Symbol variable.
    If ([bool](([System.Security.Principal.WindowsIdentity]::GetCurrent()).Groups -match 'S-1-5-32-544')) {
        $Symbol = '#'
    } Else {
        $Symbol = '$'
    }

    # Create prompt.
    "[$($env:USERNAME.ToLower())@$($env:COMPUTERNAME.ToLower()) $($executionContext.SessionState.Path.CurrentLocation)]$Symbol "
} # End Function: prompt.

With this script in place and a ScriptsToProcess entry in the module’s manifest file pointing to it, I can be sure that the user’s prompt will change the moment my module is imported. From here on out, I can rest assured that if a user — of these machines at least — sends us a screen capture, it’ll include relevant pieces of information I would have had to ask for, had this prompt not been in place.
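
If you haven’t used ScriptsToProcess before, the manifest entry itself is small. Here’s a hedged sketch of the relevant lines; the module and file names are placeholders, and only the pertinent keys are shown.

# A partial module manifest (.psd1) sketch; only the relevant keys are shown.
@{
    RootModule       = 'MyProjectModule.psm1'
    ModuleVersion    = '1.0.0'
    ScriptsToProcess = @('SetPrompt.ps1') # Runs the moment the module is imported.
}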

There’s a final thought here. When the user is done with this PowerShell module, they’re still going to have this prompt. In my case, it’s perfectly suitable because my users won’t be in PowerShell, unless they’re issuing commands from the module. At least, I can’t imagine they will be. The only other thoughts I’ve had about this “problem” would be to (1) teach users to remove the module, and have code in the prompt monitor whether the module is loaded or not, and revert the prompt if the module is removed, or (2) revert the prompt if a command outside of my module is invoked.

That’s it for today. I don’t have it shown here, but it’s neat to see the prompt alter itself when the module is loaded. Something to keep in mind, if you find yourself in a similar situation.

Update: I was annoyed that $HOME, or C:\Users\tommymaynard, was being displayed as the full path, so I made some additional modifications to the prompt that’s being used for this project. It’ll now look like this, when at C:\Users\tommymaynard (or another user’s home directory).

Here’s the new prompt function, to include a better layout.

Function prompt {
    # Determine Admin; set Symbol variable.
    If ([bool](([System.Security.Principal.WindowsIdentity]::GetCurrent()).Groups -match 'S-1-5-32-544')) {
        $Symbol = '#'
    } Else {
        $Symbol = '$'
    }

    If ((Get-Location).Path -eq $env:USERPROFILE) {
        $Path = '~'
    } Else {
        $Path = (Get-Location).Path
    }

    # Create prompt.
    "[$($env:USERNAME.ToLower())@$($env:COMPUTERNAME.ToLower()) $Path]$Symbol "
} # End Function: prompt.

ScriptsToProcess Scheduled Task

Before we really get started, I think it’s helpful to first mention that I was recently deep in a project at work that doesn’t include Active Directory, but does include Windows clients. Therefore, I had to do things in ways that could have been made easier had I had Active Directory to leverage.

I’ve long known that there’s a ScriptsToProcess option for PowerShell script modules that use a module manifest file. First, a script module is most often a collection of functions inside a script written, PowerShell module — a .psm1 file. Second, a .psd1 file is the module manifest file, and it contains information, among other things, about the module itself. One of the things that can be used in this file, is ScriptsToProcess, where a path, or paths, to a .ps1 file(s) can be included. This ultimately means a script can be run the moment your module is imported. ScriptsToProcess allows for an environment to be set up the way you want, prior to anyone actually using the functions in your module.

I quickly realized that in that newer, AWS project, that I should’ve had some sort of automation create a scheduled task on my EC2 instances. What I wish had been automated was the regularly scheduled downloading of my main PowerShell module for these projects from S3 to the instances. A task such as this would keep me, or a coworker, from ever needing to manually update the PowerShell module on these instances again. Instead, a scheduled task would just download the module a few times a day, whether or not it had been updated. If it had, then suddenly my instances would have the newest functions, with no continued manual work on my part.

And, that led me to wonder: can I create a scheduled task on an instance, as a part of the ScriptsToProcess .ps1, the moment the PowerShell module is imported? That answer is no.

Just kidding. It’s a yes! You can create a scheduled task on a computer the moment a PowerShell module is imported, thanks to ScriptsToProcess. There’s a bunch of parts and pieces to this, but what I’m going to include is the script file ScriptsToProcess executes, when my module imports.

# Create scheduled task, if able and necessary (admin).
If ([System.Boolean](([System.Security.Principal.WindowsIdentity]::GetCurrent()).Groups -match 'S-1-5-32-544')) {

    $TaskName = 'EC2General PowerShell Module Update'
    If (-Not([System.Boolean](Get-ScheduledTask -TaskPath '\' -TaskName $TaskName -ErrorAction SilentlyContinue))) {
        $Command = "Import-Module -Name 'EC2General'; Update-EC2General"
        $Action = New-ScheduledTaskAction -Execute "$PSHOME\powershell.exe" -Argument "-NoProfile -WindowStyle Hidden -ExecutionPolicy ByPass -Command & {$Command}"
        $Trigger = New-ScheduledTaskTrigger -Once -At (Get-Date).AddDays(1).Date -RepetitionInterval (New-TimeSpan -Hours 1) -RepetitionDuration ([System.TimeSpan]::MaxValue)
        $Settings = New-ScheduledTaskSettingsSet -RunOnlyIfNetworkAvailable -WakeToRun
        $Principal = New-ScheduledTaskPrincipal -UserID 'NT AUTHORITY\SYSTEM' -LogonType S4U -RunLevel Highest
        [System.Void](Register-ScheduledTask -TaskName $TaskName -Action $Action -Trigger $Trigger -Settings $Settings -Principal $Principal)
    }
}

Neat stuff, right? Here’s what this code does. It begins by determining whether the user that’s importing the module is a local administrator. They’re going to need to be, to register the scheduled task. If they are, and the task doesn’t already exist, it creates all the necessary variables to get the task created. When those are available, it registers the scheduled task.

I do want to credit Chrissy LeMaire, as a post of hers was used while I wrote the code necessary to create a scheduled task. There’s so much that goes into those, that I wasn’t going to trust myself to remember, or require myself to read the help files. I was confident she could be trusted.
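
The Update-EC2General function in the task’s action isn’t shown here. As a rough sketch, a module-refresh function like that might do little more than wrap Read-S3Object; the bucket name and key prefix below are placeholders, not the actual implementation.

# A sketch only; not the actual Update-EC2General implementation.
Function Update-EC2General {
    $Params = @{
        BucketName = 'my-module-bucket'
        KeyPrefix = 'WindowsPowerShell/Modules/EC2General/'
        Folder = "$env:ProgramFiles\WindowsPowerShell\Modules"
    }
    [System.Void](Read-S3Object @Params)
}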

This makes me wonder, though, what else should I do when a module loads?

Update: For our project, we didn’t actually use ScriptsToProcess. We did, however, manually run the code to create the task separately. It was that whole a-user-would-need-to-be-an-admin-problem, as most of my users weren’t admins. Keep that requirement in mind.

AWS Write-S3Object Folder Creation Problem

If you were wondering what I occasionally thought about over the holiday break, while I was scrubbing my pool tiles—ugh, it was AWS and the Get- and Write- S3Object cmdlets. Why wasn’t I able to get them to do what I wanted!? Let me first explain my problem, and then my solution.

When you create a folder (and yes, I realize it’s not really a folder) in the AWS Management Console, you do so by clicking the “+ Create folder” button. This guy.

Simple stuff. In my case, you choose the AES-256 encryption setting and then give the folder a name. This folder, because I created it in this manner, is returned by the Get-S3Object cmdlet. So, let’s say we walked through this manual procedure and created a folder called S3ConsoleFolder. Here’s what Get-S3Object would return, providing the S3 bucket, we’re calling tst-bucket, had been empty from the start.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/

Because it could prove helpful, let’s say we also manually created a nested folder inside of our S3ConsoleFolder called S3CFA, as in the A folder of the S3ConsoleFolder. Consider it done. Here are the new results of the Get-S3Object command. As you’ll see now, we return both the top-level folder and the newly created, nested folder.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/

Now, let’s get one step closer to using Write-S3Object. Its purpose, according to its maker, Amazon, is that it “uploads one or more files from the local file system to an S3 bucket.” That, it does. Below I’ve indicated the top-level folder, the nested folders, and the files that we’ll upload. The complete path to this folder is C:\Users\tommymaynard\Desktop\TestFolder.

TestFolder
|__ FolderA
|	|__ TestFile2.txt
|	|__ TestFile3.txt
|__ FolderB
|	|__ TestFile4.txt
|	|__ TestFile5.txt
|	|__ TestFile6.txt
|	|__ FolderC
|		|__ TestFile7.txt
|__ TestFile1.txt

The below Write-S3Object command’s purpose is to get everything in the above file structure uploaded to AWS S3. Additionally, it creates the folders we need: TestFolder, FolderA, FolderB, and FolderC. I’m using a parameter hash table below to decrease the length of my command, so it’s easier to read. It’s in no way a requirement.

$Params = @{
    BucketName = 'tst-bucket'
    Folder = 'C:\Users\tommymaynard\Desktop\TestFolder'
    KeyPrefix = (Split-Path -Path 'C:\Users\tommymaynard\Desktop\TestFolder' -Leaf).TrimEnd('\')
    Recurse = $true
    Region = 'us-gov-west-1'
    ServerSideEncryption = 'AES256'
}

With the parameter hash table created, we’ll splat it on the Write-S3Object cmdlet. When completed, we’ll run the Get version of the cmdlet again, to see what was done.

Write-S3Object @Params
(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/
TestFolder/FolderA/TestFile2.txt
TestFolder/FolderA/TestFile3.txt
TestFolder/FolderB/FolderC/TestFile7.txt
TestFolder/FolderB/TestFile4.txt
TestFolder/FolderB/TestFile5.txt
TestFolder/FolderB/TestFile6.txt
TestFolder/TestFile1.txt

Now, look at the above results and tell me what’s wrong. I’ll wait…

… Can you see it? What don’t we have included in those results, that we did when we created our folders in the AWS Management Console?

Maybe I was asking for too much, but I expected to have my folders returned on their own lines just like we do for S3ConsoleFolder/ and S3ConsoleFolder/S3CFA/. Remember, I’m lying down on my pool deck, scrubbing the pool tiles (it’s Winter, yes, but I live in southern Arizona), and I cannot for the life of me wrap my head around why I’m not seeing those folders on. their. own. lines. I expected to see these lines within my results:

TestFolder/
TestFolder/FolderA/
TestFolder/FolderB/
TestFolder/FolderB/FolderC/

Remember, I can create folders in the AWS Management Console and it works perfectly, but not with Write-S3Object. Well, not with Write-S3Object the way I was using it. I finally had an idea worth trying: I needed to use Write-S3Object to create the folders first, and then upload the files into the folders. It’s obnoxious, and it required more calls to Write-S3Object, but I was okay with that if it could get me the results I wanted. And ultimately, get my users the results I wanted them to have.

So let’s dump my TestFolder from S3 and start over. We’re here again.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/

We’ll start by creating our parameter hash table. After that, we’ll begin to use the $Param variable and its properties (its keys) to supply parameter values to parameter names. I don’t actually splat the entire hash table in this code section. Although the next three code sections go together, I’ve broken them up, so I can better explain them. This first section creates a $Path variable, an aforementioned parameter hash table (partially based on the $Path variable), and an If statement. The If statement works this way: If my S3 bucket doesn’t already include a folder called TestFolder/, then create it.

$Path = 'C:\Users\tommymaynard\Desktop\TestFolder'
$Params = @{
    BucketName = 'tst-bucket'
    Folder = (Split-Path -Path $Path -Leaf).TrimEnd('\')
    KeyPrefix = (Split-Path -Path $Path -Leaf).TrimEnd('\')
    Recurse = $true
    Region = 'us-gov-west-1'
    ServerSideEncryption = 'AES256'
}

# Create top-level folder (if necessary).
If ((Get-S3Object -BucketName $Params.BucketName -Region $Params.Region).Key -notcontains "$($Params.Folder)/") {
    Write-S3Object -BucketName $Params.BucketName -Region $Params.Region -Key "$($Params.Folder)/" -Content $Params.Folder -ServerSideEncryption AES256
}

Now, Get-S3Object returns my newly created, top-level folder. It was working so far.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/
TestFolder/

This second section gets all of the directory names from my path. If there are duplicates, and there were, they’re removed by Select-Object's Unique parameter. Once I know these, I can start creating my nested folders after cleaning them up a little: splitting the path, replacing backslashes with forward slashes, and removing any forward slashes from the beginning of the path. With each of those, we’ll make sure the cleaned-up path doesn’t include two forward slashes (this would indicate it’s the top-level folder again [as TestFolder//]), and that it doesn’t already exist.

# Create nested level folder(s) (if necessary).
$NestedPaths = (Get-ChildItem -Path $Path -Recurse).DirectoryName | Select-Object -Unique
Foreach ($NestedPath in $NestedPaths) {

    $CleanNestedPath = "$(($NestedPath -split "$(Split-Path -Path $Path -Leaf)")[-1].Replace('\','/').TrimStart('/'))"

    If (("$($Params.Folder)/$CleanNestedPath/" -notmatch '//') -and ((Get-S3Object -BucketName $Params.BucketName -Region $Params.Region).Key -notcontains "$($Params.Folder)/$CleanNestedPath/")) {
        Write-S3Object -BucketName $Params.BucketName -Region $Params.Region -Key "$($Params.Folder)/$CleanNestedPath/" -Content $CleanNestedPath -ServerSideEncryption AES256
    }
}

And, now Get-S3Object returns my nested folders, too. Still working.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/
TestFolder/
TestFolder/FolderA/
TestFolder/FolderB/
TestFolder/FolderB/FolderC/

This last section only serves to upload the files from the EC2 instance to the S3 bucket and into the folders we’ve created. Unlike the folder checks in the last two sections, this doesn’t check whether the files already exist. It’ll happily write, right over them without warning. I didn’t need this protection, so I didn’t include it.

Write-S3Object -BucketName $Params.BucketName -Region $Params.Region -Folder $Path -KeyPrefix "$($Params.Folder)/" -ServerSideEncryption AES256 -Recurse

Now that my folders are created and the files are uploaded, I get the results I expect. I can see all the folders on their own lines, as well as that of the files.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/
TestFolder/
TestFolder/FolderA/
TestFolder/FolderA/TestFile2.txt
TestFolder/FolderA/TestFile3.txt
TestFolder/FolderB/
TestFolder/FolderB/FolderC/
TestFolder/FolderB/FolderC/TestFile7.txt
TestFolder/FolderB/TestFile4.txt
TestFolder/FolderB/TestFile5.txt
TestFolder/FolderB/TestFile6.txt
TestFolder/TestFile1.txt

I do hope I didn’t overlook an easier way to do this, but as history has proved, it’s quite possible. Here’s to hoping this can help someone else. I felt pretty lost and confused until I figured it out. It seems to me that AWS needs to iron this one out for us. No matter how it’s used, Write-S3Object should create “folders” in such a way that they are consistently returned by Get-S3Object. That’s whether they’re created before files are uploaded (my fix), or simply created as files are uploaded.

And, cue the person to tell me the easier way.

Comfortably Save a Pester Test

I’ve spent some time with Pester this week. While exploring the OutputFile parameter, it quickly became clear that the best I seemed to be able to output was some form of XML, and honestly, that wasn’t good enough for me in my moment of discovery. While I intend to make myself a Pester expert in the coming year (2018), there’s somethings I don’t know 100% yet, and so I understand it’s possible that I sound like an idiot, as I may not know about x in regard to Pester.

While working with Invoke-Pester, I came across the -PassThru parameter. Its purpose in life, in relation to the Invoke-Pester command, is to create a PSCustomObject. Now that’s something I can work with. The idea here is to invoke Pester against an AWS instance, export my object (to an XML standard I can deal with [thank you Export-Clixml]), and write it to S3. Then, I can download the file to a computer that’s not my newly configured AWS instance, and check which tests passed and failed. This, without the need to RDP (Remote Desktop) to the instance, and visually and manually check to see what did and didn’t work from there. We’re getting closer and closer to RDP being a security incident.

My example is not going to include an actual Pester run, so instead we’ll jump directly to the Invoke-Pester command and what we’ll do after that command has executed. This first example is a fairly standard way of invoking Pester (with parameters) and creating an output file. Again, this isn’t the output I’m after.

PS > Invoke-Pester -Script @{Path = 'C:\WkDir\HostAcceptance.Tests.ps1'; Parameters = @{Project = 'trailking'; Environment = 'tst'}} -OutputFile 'C:\WkDir\PesterOutputXml.xml'

Instead we’re going to include some other parameters, to include PassThru, Show, and OutVariable. PassThru will provide us an object that contains all of the Pester results, Show with the None value will hide the Pester test results, and OutVariable will get that object (that again, contains all of the Pester results) into the PesterResults variable.

PS > Invoke-Pester -Script @{Path = 'C:\WkDir\HostAcceptance.Tests.ps1'; Parameters = @{Project = 'trailking'; Environment = 'tst'}} -PassThru -Show None -OutVariable PesterResults

I was mainly okay with the above command; however, it was still writing my object to the host program, and I wanted nothing to be displayed at all. It’s too bad that option isn’t in there by default, but I’m okay with improvisation. Again, I know PowerShell better than Pester, so there’s the possibility I just don’t know enough at the moment to have done this a potentially better way. Anyway, the below example removes all the output to the host, but still stuffs my results into the PesterResults variable.

PS > [System.Void](Invoke-Pester -Script @{Path = 'C:\WkDir\HostAcceptance.Tests.ps1'; Parameters = @{Project = 'trailking'; Environment = 'tst'}} -PassThru -Show None -OutVariable PesterResults)

Okay, now what? Everything I need and more is now in the $PesterResults variable. Next, we’ll export it into an XML format I can deal with (as in, one Import-Clixml can read).

PS > $PesterResults | Export-Clixml -Path 'C:\WkDir\PesterExport.xml'

Now that it’s in a usable format, I’m going to read it back in. You can go ahead and pretend that I’ve moved my exported XML file from the computer on which it was created, and now I’m about to read it in on a different computer. This visualization is as though it was uploaded to AWS S3, and downloaded from S3 on a different device. We’ll say it’s on my Desktop.

$PesterResults = Import-Clixml -Path 'C:\Users\tommymaynard\Desktop\PesterStuff\PesterExport.xml'
Foreach ($Result in $PesterResults.TestResult) {
    [System.Array]$Object += [PSCustomObject]@{
        Describe = $Result.Describe.TrimEnd(':')
        Context = $Result.Context.TrimEnd(':')
        It = $Result.Name.TrimEnd(':')
        Should = $Result.Result
        Time = $Result.Time
    }
} 

$Object
# $Object | Format-Table -Autosize
# $Object | Export-Csv -Path 'C:\Users\tommymaynard\Desktop\PesterStuff\PesterExport.csv' -NoTypeInformation

In the above example, we do a few things. We import the file we created out in AWS into a variable on my local computer, and then begin to iterate through its contents. For each entry, we add another object to my $Object variable. Each object will contain a Describe, Context, It, Should, and Time property, all of which is obtained from the TestResult property of my $PesterResults variable.

The last three lines are a few ways to handle the output (2 and 3 are commented out): (1) display it in the host, (2) display it in the host in an easier-to-read format, and (3) write the objects to CSV. Neat, right? As I continue to better learn Pester… I might just be back here. I can’t be 100% sure that this is the best way to save off and deal with the results of the Pester tests, but we’ll see!

Technical Fact at PowerShell Launch

When I read books, or websites, and find worthy facts, I aim to try and keep them. If I read a book, my bookmark is often a few pieces of paper stapled together with info and page numbers from the book. Well, I’m scrapping that technique, which went hand-in-hand with folding down page corners. I’m also ditching the small pieces of paper that litter my desk with random bits of information: “PowerShell objects come from classes,” and “LastLogonDate is the converted version of LastLogonTimeStamp.” Now, it’s all slated to go in a single file called InfoLines.txt in Dropbox, here: C:\Users\tommymaynard\Dropbox\PowerShell\Profile.

If you’re wondering why I use a Dropbox folder it’s because I want this file and its worthy facts to be available regardless of whether I’m on my work, or home computer. You can read more in a post I wrote that uses Dropbox to sync my profile script between work and home, here: http://tommymaynard.com/sync-profile-script-from-work-to-home-2017. It may help make what I’m doing here make more sense.

For now, because I just started this today, I only have a few lines of worthy information. Here’s the contents of my InfoLines.txt file so far. If you can’t tell, I’m finishing up Amazon Web Services in Action. I only have 70 more pages to go!

The auto-scaling group is responsible for connecting a newly launched EC2 instance with the load balancer (ELB). (AWS in Action, p.315)
DevOps is an approach driven by software development to bring development and operations closer together. (AWS in Action, p.93)
Auto-scaling is a part of the EC2 service and helps you to ensure that a specified number of virtual servers are running. (AWS in Action, p.294)

Each time I open the ConsoleHost, the ISE, or Visual Studio Code, I want a random line from the file to be shared with me. The below code will return a single, random line with asterisks both above and below it. This is in order to help separate it from the (totally unnecessary, unwelcome, and shouldn’t even be there) message that tells me how long my “personal and system profiles” took to load, and my prompt. They need to put that message in a variable and not on my screen without permission. Don’t tell us what you think we need to know, guys.

'**************'
Get-Content -Path "$env:USERPROFILE\Dropbox\PowerShell\Profile\InfoLines.txt" | Get-Random
'**************'

That’s it. Now, whenever I open one of these PowerShell hosts, I’ll get a quick reminder about something I found important, and want to keep fresh in my mind.

Update: I decided I wanted the asterisks above and below my technical fact to go from one end of the PowerShell host to other. Here’s how I did that.

'*' * ($Host.UI.RawUI.BufferSize.Width - 1)
Get-Content -Path "$env:USERPROFILE\Dropbox\PowerShell\Profile\InfoLines.txt" | Get-Random
'*' * ($Host.UI.RawUI.BufferSize.Width - 1)

AWS EC2 Instance CSV File: Let’s Have it, Amazon

A while back I found something that AWS began offering that made the below paragraph meaningless. I’ll be back with more soon. I know I said that before, but I’ll try extra hard this time…

I’ve wanted it for some time now… an always up-to-date CSV file that I can programmatically download—using PowerShell, duh—directly from AWS. But not just any CSV. I need a continually updated CSV that includes all the possible information on every AWS EC2 instance type. It can even include the previous generation instances, providing there’s a column that indicates whether it’s current or not. It would probably contain information I’ll never need, or want, and I’d still want the file, as I can easily filter against the contents of the document. I’d love to write the website post that shows how to do that if we can get this file to fruition.

 

AWS UserData Multiple Run Framework

In AWS, we can utilize the UserData section in EC2 to run PowerShell against our EC2 instances at launch. I’ve said it before; I love this option. As someone that speaks PowerShell with what likely amounts to first language fluency, there’s so much I do to automate my machine builds with CloudFormation, UserData, and PowerShell.

I’ve begun to have a need to do various pieces of automation at different times. This is to say I need to have multiple instance restarts, as an instance is coming online, in order to separate different pieces of configuration and installation. You’ll figure out when you need that, too. And, when you do, you can use what I’ve dubbed the “multiple run framework for AWS.” But really, you call it what you want. That hardly matters.

We have to remember that, by default, UserData only runs once: when the EC2 instance launches for the first time. In the below example, we’re going to do three restarts and four separate code runs.

Our UserData section first needs to add a function to memory. I’ve called it Set-SystemForNextRun and its purpose is to (1) create what I call a “passfile” to help indicate where we are in the automation process, (2) enable UserData to run the next time the service is restarted (this happens at instance restart, obviously), and (3) restart the EC2 instance. Let’s have a look. It’s three parameters and three If statements; simple stuff.

Function Set-SystemForNextRun {
    Param (
        [string]$PassFile,
        [switch]$UserData,
        [switch]$Restart
    )
    If ($PassFile) {
        [System.Void](New-Item -Path "$env:SystemDrive\passfile$PassFile.txt" -ItemType File)
    }
    If ($UserData) {
        $Path = "$env:ProgramFiles\Amazon\Ec2ConfigService\Settings\config.xml"
        [xml]$ConfigXml = Get-Content -Path $Path
        ($ConfigXml.Ec2ConfigurationSettings.Plugins.Plugin |
            Where-Object -Property Name -eq 'Ec2HandleUserData').State = 'Enabled'
        $ConfigXml.Save($Path)
    }
    If ($Restart) {
        Restart-Computer -Force
    }
}

The above function accepts three parameters: PassFile, UserData, and Restart. PassFile accepts a string value. You’ll see how this works in the upcoming If-ElseIf example. UserData and Restart are switch parameters. If they’re included when the function is invoked, they’re True ($true), and if they’re not included, they’re False ($false).

Each of the three parameters has its own If statement within the Set-SystemForNextRun function. If PassFile is included, it creates a text file called C:\passfile<ValuePassedIn>.txt. If UserData is included, it resets UserData to enabled (it effectively checks the check box in the Ec2Config GUI). If Restart is included, it restarts the instance, right then and there.

Now let’s take a look at the If-ElseIf statement that completes four code runs and three restarts. We’ll discuss it further below, but before we do, a little reminder. Our CloudFormation UserData PowerShell is going to contain the above Set-SystemForNextRun function, and something like you’ll see below after you’ve edited it for your needs.

If (-Not(Test-Path -Path "$env:SystemDrive\passfile1.txt")) {

    # Place code here (1).

    # Invoke Set-SystemForNextRun function.
    Set-SystemForNextRun -PassFile '1' -UserData -Restart

} ElseIf (-Not(Test-Path -Path "$env:SystemDrive\passfile2.txt")) {

    # Place code here (2).

    # Invoke Set-SystemForNextRun function.
    Set-SystemForNextRun -PassFile '2' -UserData -Restart

} ElseIf (-Not(Test-Path -Path "$env:SystemDrive\passfile3.txt")) {

    # Place code here (3).

    # Invoke Set-SystemForNextRun function.
    Set-SystemForNextRun -PassFile '3' -UserData -Restart

} ElseIf (-Not(Test-Path -Path "$env:SystemDrive\passfile4.txt")) {

    # Place code here (4).

    # Invoke Set-SystemForNextRun function.
    Set-SystemForNextRun -PassFile '4'

}

In line 1, we test whether or not the file C:\passfile1.txt exists. If it doesn’t exist, we run the code in the If portion. This will run whatever PowerShell we add to that section. Then it’ll pass 1 to the Set-SystemForNextRun function to have C:\passfile1.txt created. Additionally, because the UserData and Restart parameters are included, it’ll reset UserData to enabled, and restart the EC2 instance. Because the C:\passfile1.txt file now exists, the next time the UserData runs, it’ll skip the If portion and evaluate the first ElseIf statement.

This ElseIf statement determines whether or not the C:\passfile2.txt file exists, or not. If it doesn’t, and it won’t after the first restart, then the code in this ElseIf will run. When it’s done, it’ll create the passfile2.txt file, reset UserData, and restart the instance. It’ll do this for the second ElseIf (third code run), and the final ElseIf (fourth code run), as well. Notice that the final invocation of the Set-SystemForNextRun function doesn’t enable UserData or Restart the instance. Be sure to add those if you need either completed after the final ElseIf completes.

And that’s it. At this point in time, I always use my Set-SystemForNextRun function and a properly written If-ElseIf statement to separate the configuration and installation around the necessary amount of instance restarts. In closing, keep in mind that deleting those pass files from the root of the C:\ drive is not something you’ll likely want to do. In time, I may do a rewrite that stores entries in the Registry perhaps, so there’s less probability that one of these files might be removed by someone.

Either way, I hope this is helpful for someone! If you’re in this space—AWS, CloudFormation, UserData, and PowerShell—then chances are good that at some point you’re going to want to restart an instance, and then continue to configure it.


PowerShell Code and AWS CloudFormation UserData

Note: This post was written well over a month ago, but was never posted, due to some issues I was seeing in AWS GovCloud. It works 100% of the time now, in both GovCloud and non-GovCloud AWS. That said, if you’re using Read-S3Object in GovCloud, you’re going to need to include the Region parameter name and value.

As I spend more and more time with AWS, I end up back at PowerShell. If I haven’t said it yet, thank you Amazon Web Services, for writing us a PowerShell module.

In the last month, or two, I’ve been getting into the CloudFormation template business. I love the whole UserData option we have—injecting PowerShell code into an EC2 instance during its initialization. And I love that, while we can do it in the AWS Management Console, we can do it with CloudFormation (CFN) too. In the last few months, I’ve decided to do things a bit differently. Instead of dropping large amounts of PowerShell code inside my UserData property in the CFN template, I decided to use Read-S3Object to copy PowerShell modules to EC2 instances, and then just issue calls to the functions in the remainder of the CFN UserData. In one instance, I went from 200+ lines of PowerShell in the CFN template to just a few.

To test, I needed to verify if I could get a module folder and file into the proper place on the instance and be able to use the module’s function(s) immediately, without any need to end one PowerShell session and start a new one. I suspected this would work just fine, but it needed to be seen.

Here’s how the testing went: On my Desktop, I have a folder called MyModule. Inside the folder, I have a file called MyModule.psm1. If you haven’t seen it before, this file extension indicates the file is a PowerShell module file. The contents of the file, are as follows:

Function Get-A {
    'A'
}

Function Get-B {
    'B'
}

Function Get-C {
    'C'
}

The file contents indicate that the module contains three functions: Get-A, Get-B, and Get-C. In the next example, we can see that the Desktop folder isn’t a location where we can expect a module folder and file to be automatically loaded into the PowerShell session. PowerShell isn’t aware of this module on its own, as can be seen below.

Get-A
Get-A : The term 'Get-A' is not recognized as the name of a cmdlet, function, script file, or operable program. Check
the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ Get-A
+ ~~~~~
    + CategoryInfo          : ObjectNotFound: (Get-A:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
Get-B
Get-B : The term 'Get-B' is not recognized as the name of a cmdlet, function, script file, or operable program. Check
the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ Get-B
+ ~~~~~
    + CategoryInfo          : ObjectNotFound: (Get-B:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
Get-C
Get-C : The term 'Get-C' is not recognized as the name of a cmdlet, function, script file, or operable program. Check
the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ Get-C
+ ~~~~~
    + CategoryInfo          : ObjectNotFound: (Get-C:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

While I could tell PowerShell to look on my desktop, what I wanted to do is have my CFN template copy the module folder out of S3 and place it on the instances, in a preferred and proper location: C:\Program Files\WindowsPowerShell\Modules. This is a location that PowerShell checks for modules automatically, and from which it loads them the moment a contained function, or cmdlet, from the module is requested. My example uses a different path, but PowerShell checks that one automatically, as well. As a part of this testing, we’re pretending that the movement from my desktop is close enough to the movement from S3 to an EC2 instance. I’ll obviously test this more with AWS.

Move-Item -Path .\Desktop\MyModule\ -Destination C:\Users\tommymaynard\Documents\WindowsPowerShell\Modules\
Get-A
A
Get-B
B
Get-C
C

Without the need to open a new PowerShell session, I absolutely could use the functions in my module, the moment the module was moved from the Desktop into a folder PowerShell looks at by default. Speaking of those locations, you can view them by returning the value in the $env:PSModulePath environmental variable. Use $env:PSModulePath -split ';' to make it easier to read.
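
For reference, on a typical Windows PowerShell 5.x box, that returns something like the below; your user name and exact entries may differ.

$env:PSModulePath -split ';'
C:\Users\tommymaynard\Documents\WindowsPowerShell\Modules
C:\Program Files\WindowsPowerShell\Modules
C:\Windows\system32\WindowsPowerShell\v1.0\Modules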

Well, it looks like I was right. I can simply drop those module folders on the EC2 instance, into C:\Program Files\WindowsPowerShell\Modules, just before they’re used with no need for anything more than the current PowerShell session that’s moving them into place.

Update: After this on-my-own-computer test, I took it to AWS. It works, and now it’s the only way I use my CFN template UserData. I write my function(s), house them in a PowerShell module(s), copy them to S3, and finally use my CFN UserData to copy them to the EC2 instance. When that’s complete, I can call the contained function(s) without any hesitation, or additional work. It wasn’t necessary, but I added sleep commands between the function invocations. Here’s a quick, modified example you might find in the UserData of one of my CloudFormation templates.

      UserData:
        Fn::Base64:
          !Sub |
          <powershell>
            # Download PowerShell Modules from S3.
            $Params = @{
              BucketName = 'windows'
              Keyprefix = 'WindowsPowerShell/Modules/ProjectVII/'
              Folder = "$env:ProgramFiles\WindowsPowerShell\Modules"
            }
            Read-S3Object @Params | Out-Null

            # Invoke function(s).
            Set-TimeZone -Verbose -Log
            Start-Sleep -Seconds 15

            Add-EncryptionType -Verbose -Log
            Start-Sleep -Seconds 15

            Install-ProjectVII -Verbose -Log
            Start-Sleep -Seconds 15
          </powershell>

Silent Install from an ISO

In the last several weeks, I’ve been having a great time writing PowerShell functions and modules for new projects moving to Amazon Web Services (AWS). I’m thrilled with the inclusion of UserData as a part of provisioning an EC2 instance. Having developed my PowerShell skills, I’ve been able to leverage them in conjunction with UserData to do all sorts of things to my instances. I’m reaching into S3 for installers, expanding archive files, creating folders, bringing down custom-written modules in UserData and invoking the contained functions from there, too. I’m even setting the timezone. It seems so straightforward, sure, but getting automation and logging wrapped around that need is rewarding.

As a part of an automated SQL installation — yes, the vendor told me they don’t support AWS RDS — I had a new challenge. It wasn’t overly involved by any means, but it’s worthy of sharing, especially if someone hits this post in a time of need, and gets a problem solved. I’ve said it a million times: I often write, so I have a place to put things I may forget, but truly, it’s about anyone else I can help, as well. I’ve been at that almost three years now.

Back to Microsoft SQL: It’s on an ISO. I’ve been pulling down Zip files for weeks, in various projects, with CloudFormation, and expanding them, but this was a new one. I needed to get at the files in that ISO to silently run an installation. Enter the Mount-DiskImage function from Microsoft’s Storage module. Its help synopsis says this: “Mounts a previously created disk image (virtual hard disk or ISO), making it appear as a normal disk.” The command to pull just that help information is listed below.

PS > (Get-Help -Name Mount-DiskImage).Synopsis

As I typically do, I started working with the function in order to learn how to use it. It works as described. Here’s the command I used to mount my ISO.

PS > Mount-DiskImage -ImagePath 'C:\Users\tommymaynard\Desktop\SQL2014.ISO'

The above example doesn’t produce any output by default, and I rather like it that way. After a dismount — it’s the same above command with Dismount-DiskImage instead of Mount-DiskImage — I tried it with the -PassThru parameter. This parameter returns an object with some relevant information.

PS > Mount-DiskImage -ImagePath 'C:\Users\tommymaynard\Desktop\SQL2014.ISO' -PassThru

Attached          : False
BlockSize         : 0
DevicePath        :
FileSize          : 2606895104
ImagePath         : C:\Users\tommymaynard\Desktop\SQL2014.ISO
LogicalSectorSize : 2048
Number            :
Size              : 2606895104
StorageType       : 1
PSComputerName    :

The first thing I noticed about this output is that it didn’t provide the drive letter used to mount the ISO. I was going to need that drive letter in PowerShell, in order to move to that location and run the installer. Even if I didn’t move to that location, I needed the drive letter to create a full path. The drive letter was vital, and this is why we’re here today.

Update: See the below post replies where Get-Volume is used to discover the drive letter.
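
That approach looks something like the below. This is a sketch, and it assumes the Storage module’s Get-Volume will accept the mounted disk image over the pipeline.

# A sketch of the Get-Volume approach mentioned in the replies.
$Image = Mount-DiskImage -ImagePath 'C:\Users\tommymaynard\Desktop\SQL2014.ISO' -PassThru
$DriveLetterUsed = ($Image | Get-Volume).DriveLetter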

Although the warmup seemed to take a bit, we’re almost done for today. I’ll drop the code below, and we’ll do a quick, line-by-line walk through.

# Mount SQL ISO and run setup.exe.
PS > $DrivesBeforeMount = (Get-PSDrive).Name
PS >
PS > Mount-DiskImage -ImagePath 'C:\Users\tommymaynard\Desktop\SQL2014.ISO'
PS >
PS > $DrivesAfterMount = (Get-PSDrive).Name
PS >
PS > $DriveLetterUsed = (Compare-Object -ReferenceObject $DrivesBeforeMount -DifferenceObject $DrivesAfterMount).InputObject
PS >
PS > Set-Location -Path "$DriveLetterUsed`:\"

Line 2: The first command in this series stores the name property of all the drives in our current PowerShell session in a variable named $DrivesBeforeMount. That name should offer some clues.

Line 4: This line should look familiar; it mounts our SQL 2014 ISO (to a mystery drive letter).

Line 6: Here, we run the same command as in Line 2, however, we store the results in $DrivesAfterMount. Do you see what we’re up to yet?

Line 8:  This command compares our two recently created Drive* variables. We want to know which drive is there now, that wasn’t when the first Get-PSDrive command was run.

Line 10: And finally, now that we know the drive letter used for our newly mounted ISO, we can move there in order to access the setup.exe file.
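
From there, kicking off the silent installation is one more command. Here’s a hedged sketch, assuming a prepared SQL configuration file; the configuration file path is a placeholder.

# A sketch of the silent install step; the ConfigurationFile path is a placeholder.
Start-Process -FilePath "$DriveLetterUsed`:\setup.exe" -ArgumentList '/Q','/IACCEPTSQLSERVERLICENSETERMS','/ConfigurationFile=C:\WkDir\ConfigurationFile.ini' -Wait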

Okay, that’s it for tonight. Now back to working on a silent SQL install on my EC2 instance.

Get the AWS Noun Prefixes

One of the nice things about the Get-AWSPowerShellVersion cmdlet is the ListServiceVersionInfo switch parameter. It returns properties for the Service (as in the AWS Service offering name), the Noun Prefix, and API Version. Yes, there really are spaces in those last two property names; however, they’ve been fixed in my function. I had been hoping for an easier way to determine the prefixes used in the AWS cmdlets, and here we have it. I actually considered parsing cmdlet names myself, so a huge thanks to AWS, for making sure that wasn’t necessary.

It’s almost as though this should have been its own cmdlet—a get the version cmdlet and a get the noun prefixes cmdlet. Therefore, I’ve wrapped this command and its parameter in a quick and easy-to-use function for my user base. Copy, paste, and try it out; it’s all yours. Super simple.

Function Get-AWSNounPrefix {
<#
.SYNOPSIS
    The function returns the AWS PowerShell cmdlet noun prefixes, along with the corresponding AWS Service.

.DESCRIPTION
    The function returns the AWS PowerShell cmdlet noun prefixes, along with the corresponding AWS Service name. This function utilizes the AWSPowerShell module's Get-AWSPowerShellVersion function.

.EXAMPLE
    -------------------------- EXAMPLE 1 --------------------------
    PS > Get-AWSNounPrefix
    This examples returns all the noun prefixes and their corresponding AWS Service name.

.EXAMPLE
    -------------------------- EXAMPLE 2 --------------------------
    PS > Get-AWSNounPrefix | Where-Object NounPrefix -match 'cf'
    NounPrefix Service            APIVersion
    ---------- -------            ----------
    CF         Amazon CloudFront  2016-09-29
    CFG        AWS Config         2014-11-12
    CFN        AWS CloudFormation 2010-05-15

    This example uses the Where-Object cmdlet and the Match operator to filter the results by the NounPrefix.

.EXAMPLE
    -------------------------- EXAMPLE 3 --------------------------
    PS > Get-AWSNounPrefix | Where-Object Service -like '*formation*'
    NounPrefix Service            APIVersion
    ---------- -------            ----------
    CFN        AWS CloudFormation 2010-05-15

    This example uses the Where-Object cmdlet and the Like operator to filter the results by the service name.

.EXAMPLE
    -------------------------- EXAMPLE 4 --------------------------
    PS > Get-AWSNounPrefix | Where-Object -Property APIVersion -like '2016*' | Sort-Object -Property APIVersion -Descending
    NounPrefix Service                          APIVersion
    ---------- -------                          ----------
    SMS        Amazon Server Migration Service  2016-10-24
    BGT        AWS Budgets                      2016-10-20
    CF         Amazon CloudFront                2016-09-29
    EC2        Amazon Elastic Compute Cloud     2016-09-15
    SNOW       AWS Import/Export Snowball       2016-06-30
    CGIP       Amazon Cognito Identity Provider 2016-04-18
    INS        Amazon Inspector                 2016-02-16
    AAS        Application Auto Scaling         2016-02-06
    MM         AWS Marketplace Metering         2016-01-14
    DMS        AWS Database Migration Service   2016-01-01

    This example uses the Where-Object, and Sort-Object cmdlet, to find services updated in 2016 and sorts by the most recently added and updated.

.NOTES
    Name: Get-AWSNounPrefix
    Author: Tommy Maynard
    Comments: Current version of AWSPowerShell module at the first, last edit: 3.3.20.0.
    Last Edit: 11/18/2016
    Version 1.0
#>
    [CmdletBinding()]
    Param (
    )

    Begin {
        $AWSServices = Get-AWSPowerShellVersion -ListServiceVersionInfo | Sort-Object -Property 'Noun Prefix'
    } # End Begin.

    Process {
        Foreach ($AWSService in $AWSServices) {
            [PSCustomObject]@{
                NounPrefix = $AWSService.'Noun Prefix'
                Service = $AWSService.Service
                APIVersion = $AWSService.'API Version'
            }
        } # End Foreach.
    } # End Process.

    End {
    } # End End.
} # End Function: Get-AWSNounPrefix.