AWS UserData Multiple Run Framework

In AWS, we can use the UserData section in EC2 to run PowerShell against our EC2 instances at launch. I’ve said it before; I love this option. As someone who speaks PowerShell with what likely amounts to first-language fluency, there’s so much I do to automate my machine builds with CloudFormation, UserData, and PowerShell.

I’ve begun to need various pieces of automation to run at different times. That is to say, I need multiple instance restarts, as an instance comes online, in order to separate different pieces of configuration and installation. You’ll figure out when you need that, too. And when you do, you can use what I’ve dubbed the “multiple run framework for AWS.” Really, though, call it whatever you want; the name hardly matters.

We have to remember that, by default, UserData only runs once: when the EC2 instance launches for the first time. In the below example, we’re going to do three restarts and four separate code runs.

Our UserData section first needs to add a function to memory. I’ve called it Set-SystemForNextRun, and its purpose is to (1) create what I call a “passfile” to help indicate where we are in the automation process, (2) enable UserData to run the next time the service is restarted (this happens at instance restart, obviously), and (3) restart the EC2 instance. Let’s have a look. It’s three parameters and three If statements; simple stuff.

Function Set-SystemForNextRun {
    Param (
        [string]$PassFile,
        [switch]$UserData,
        [switch]$Restart
    )
    # Create a passfile to record how far along the automation is.
    If ($PassFile) {
        [System.Void](New-Item -Path "$env:SystemDrive\passfile$PassFile.txt" -ItemType File)
    }
    # Re-enable the Ec2HandleUserData plugin, so UserData runs again at the next boot.
    If ($UserData) {
        $Path = "$env:ProgramFiles\Amazon\Ec2ConfigService\Settings\config.xml"
        [xml]$ConfigXml = Get-Content -Path $Path
        ($ConfigXml.Ec2ConfigurationSettings.Plugins.Plugin |
            Where-Object -Property Name -eq 'Ec2HandleUserData').State = 'Enabled'
        $ConfigXml.Save($Path)
    }
    # Restart the EC2 instance.
    If ($Restart) {
        Restart-Computer -Force
    }
}

The above function accepts three parameters: PassFile, UserData, and Restart. PassFile accepts a string value. You’ll see how this works in the upcoming If-ElseIf example. UserData and Restart are switch parameters. If they’re included when the function is invoked, they’re True ($true); if they’re not, they’re False ($false).
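
If you’ve not worked with switch parameters before, here’s a quick, throwaway demonstration (the function name is made up just for this example):

Function Test-SwitchDemo {
    Param (
        [switch]$Restart
    )
    # A switch parameter is $true only when it's included at invocation.
    $Restart.IsPresent
}

PS > Test-SwitchDemo -Restart
True
PS > Test-SwitchDemo
False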

Each of the three parameters has its own If statement within the Set-SystemForNextRun function. If PassFile is included, it creates a text file called C:\passfile&lt;ValuePassedIn&gt;.txt. If UserData is included, it resets UserData to enabled (effectively, it checks the check box in the Ec2Config GUI). If Restart is included, it restarts the instance, right then and there.

Now let’s take a look at the If-ElseIf statement that completes four code runs and three restarts. We’ll discuss it further below, but first, a little reminder: our CloudFormation UserData PowerShell is going to contain the above Set-SystemForNextRun function, followed by something like what you’ll see below, after you’ve edited it for your needs.

If (-Not(Test-Path -Path "$env:SystemDrive\passfile1.txt")) {

    # Place code here (1).

    # Invoke Set-SystemForNextRun function.
    Set-SystemForNextRun -PassFile '1' -UserData -Restart

} ElseIf (-Not(Test-Path -Path "$env:SystemDrive\passfile2.txt")) {

    # Place code here (2).

    # Invoke Set-SystemForNextRun function.
    Set-SystemForNextRun -PassFile '2' -UserData -Restart

} ElseIf (-Not(Test-Path -Path "$env:SystemDrive\passfile3.txt")) {

    # Place code here (3).

    # Invoke Set-SystemForNextRun function.
    Set-SystemForNextRun -PassFile '3' -UserData -Restart

} ElseIf (-Not(Test-Path -Path "$env:SystemDrive\passfile4.txt")) {

    # Place code here (4).

    # Invoke Set-SystemForNextRun function.
    Set-SystemForNextRun -PassFile '4'

}

In line 1, we test whether the file C:\passfile1.txt exists. If it doesn’t, we run the code in the If portion. This will run whatever PowerShell we add to that section. Then it’ll pass 1 to the Set-SystemForNextRun function to have C:\passfile1.txt created. Additionally, because the UserData and Restart parameters are included, it’ll reset UserData to enabled, and restart the EC2 instance. Because the C:\passfile1.txt file now exists, the next time the UserData runs, it’ll skip the If portion and evaluate the first ElseIf statement.

This ElseIf statement determines whether the C:\passfile2.txt file exists. If it doesn’t, and it won’t after the first restart, then the code in this ElseIf will run. When it’s done, it’ll create the passfile2.txt file, reset UserData, and restart the instance. It’ll do the same for the second ElseIf (third code run) and the final ElseIf (fourth code run). Notice that the final invocation of the Set-SystemForNextRun function doesn’t enable UserData or restart the instance. Be sure to add those parameters if you need either of those actions after the final ElseIf completes.

And that’s it. These days, I always use my Set-SystemForNextRun function and a properly written If-ElseIf statement to separate configuration and installation across the necessary number of instance restarts. In closing, keep in mind that you won’t want to delete those pass files from the root of the C:\ drive. In time, I may do a rewrite that stores entries in the Registry instead, so there’s less chance that one of these files is removed by someone.
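
If I ever write that version, it might look something like this minimal sketch; the key path and value name here are hypothetical, not part of the current framework:

# Hypothetical registry-based replacement for the pass files.
$Key = 'HKLM:\SOFTWARE\MultipleRunFramework'
If (-Not (Test-Path -Path $Key)) {
    [System.Void](New-Item -Path $Key)
}

# Record that pass 1 has completed (this replaces creating C:\passfile1.txt).
[System.Void](New-ItemProperty -Path $Key -Name 'Pass1' -Value 1 -PropertyType DWord)

# The If-ElseIf tests would then check the Registry instead of the file system.
If (-Not (Get-ItemProperty -Path $Key -Name 'Pass1' -ErrorAction SilentlyContinue)) {
    # Place code here (1).
}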

Either way, I hope this is helpful for someone! If you’re in this space—AWS, CloudFormation, UserData, and PowerShell—then chances are good that at some point you’re going to want to restart an instance, and then continue to configure it.


Linux Prompt on Windows – Part V

The last time I wrote about my Linux prompt, we were on post IV. Now it’s V, and all because I’m tired of not knowing whether I’m in the debugger or not. The standard PowerShell prompt, when in the debugger, adds [DBG]: to the beginning of the prompt and an extra right angle bracket to the end. Therefore, the standard PowerShell prompt ends up looking like it does toward the bottom of this first example.

PS C:\Users\tommymaynard> Set-PSBreakpoint -Script .\Desktop\NewScript.ps1 -Line 7
ID Script                Line Command               Variable             Action
-- ------                ---- -------               --------             ------
 0 NewScript.ps1            7

PS C:\Users\tommymaynard> .\Desktop\NewScript.ps1
[DBG]: PS C:\Users\tommymaynard>> q
PS C:\Users\tommymaynard>

I’ll include my entire prompt at the end of today’s post, but before we get there, let’s focus on the new part. It’s going to add these same two things to the prompt when I’m debugging a script. If the path Variable:/PSDebugContext exists, we can safely assume we’re in the debugger. Therefore, when we are, we’ll assign two new variables, $DebugStart and $DebugEnd.

If (Test-Path -Path Variable:/PSDebugContext) {
    $DebugStart = '[DBG]: '
    $DebugEnd = ']'
}
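
You can prove this to yourself outside the debugger; the PSDebugContext variable doesn’t exist there, so the test returns False (at a breakpoint, the same command returns True):

PS C:\> Test-Path -Path Variable:/PSDebugContext
False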

Again, the above If statement is stuffed in between a bunch of other PowerShell that makes up the entire prompt. Before we get there, here’s an example of what my prompt looks like now, both in and out of the debugger.

[tommymaynard@server01 c/~]$ .\Desktop\NewScript.ps1
[DBG]: [tommymaynard@server01 c/~]]$ q
[tommymaynard@server01 c/~]$ 

Excellent! Now I can continue to use my own prompt function and know when I’m in the debugger, all without hitting an error to remind me. In the full prompt below, we also update the WindowTitle to reflect when we’re in the debugger.

# Create Linux prompt.
Function Prompt {
    (Get-PSProvider -PSProvider FileSystem).Home = $env:USERPROFILE

    # Determine if Admin and set Symbol variable.
    If ([bool](([System.Security.Principal.WindowsIdentity]::GetCurrent()).Groups -match 'S-1-5-32-544')) {
        $Symbol = '#'
    } Else {
        $Symbol = '$'
    }
	 
    # Write Path to Location Variable as /.../...
    If ($PWD.Path -eq $env:USERPROFILE) {
        $Location = '/~'
    } ElseIf ($PWD.Path -like "*$env:USERPROFILE*") {
        $Location = "/$($PWD.Path -replace ($env:USERPROFILE -replace '\\','\\'),'~' -replace '\\','/')"
    } Else {
        $Location = "$(($PWD.Path -replace '\\','/' -split ':')[-1])"
    }

    # Determine Host for WindowTitle.
    Switch ($Host.Name) {
        'ConsoleHost' {$HostName = 'consolehost'; break}
        'Windows PowerShell ISE Host' {$HostName = 'ise'; break}
        default {}
    }

    # Create and write Prompt; Write WindowTitle.
    $UserComputer = "$($env:USERNAME.ToLower())@$($env:COMPUTERNAME.ToLower())" 
    $Location = "$((Get-Location).Drive.Name.ToLower())$Location"

    # Check if in the debugger.
    If (Test-Path -Path Variable:/PSDebugContext) {
        $DebugStart = '[DBG]: '
        $DebugEnd = ']'
    }

    # Actual prompt and title.
    $Host.UI.RawUI.WindowTitle = "$HostName`: $DebugStart[$UserComputer $Location]$DebugEnd$Symbol"
    "$DebugStart[$UserComputer $Location]$DebugEnd$Symbol "
}
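
If you want to try this prompt yourself, one low-ceremony approach (assuming you keep a profile script) is to paste the Prompt function into your profile and then dot-source the profile back into the current session:

PS C:\> notepad $PROFILE
PS C:\> . $PROFILE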

Break From a Nested Loop

I’m building a new version of my “Multi-Level Menu System with a Back Option.” Here’s the URL from the May 2016 post: http://tommymaynard.com/script-sharing-multi-level-menu-system-with-back-option-2016. This is a post where I wrote about creating a text-based nested menu system. It was neat, but the nested Switch statements got a little confusing, and so it was never used by me, or potentially anyone else. I have no idea.

What I do know is that a few times this past weekend, I was able to work on a redesign for this menu system. The problem was this: From the main menu, you can press Q to quit. In the nested menus, you either choose an option (1 through whatever), or press B to go back a menu. This means that if you’re three menus deep, you have to press B until you’re back at the main menu in order to press Q to quit. You can’t quit from a nested menu. Well, you couldn’t before this weekend, anyway.

We won’t go into the menu system for now, but I do want to leave an example of how to break out of nested loops. I seriously learned something I’d never seen before, and so maybe this will be a first for you, as well. We’ve all seen, and likely used, break. You can read more in about_Break using Get-Help: Get-Help -Name about_Break -ShowWindow.

In this first example, we’ll write the string “This is a test.” until we stop the loop’s execution. There’s nothing about this loop that’s ever going to make it stop without our help.

While ($true) {
    'This is a test.'
}

'This is a test.'
'This is a test.'
'This is a test.'
...

In this next example, we’ll immediately break out of the While loop by using the break statement. Prior to breaking out, we’ll write “This is a test.” to the host program, but this time, it’ll only be written once and then the execution will end.

While ($true) {
    'This is a test.'
    break
}
'This is a test.'

Let’s start our next example by nesting a While loop, inside of a While loop. In this example, we’ll write “Outer While loop” once, and then continually write “Nested While loop” until we manually end the execution. We can’t get back to the outer While loop, when we’re forever stuck in the inner While loop.

While ($true) {
    'Outer While loop'

    While ($true) {
        'Nested While loop'
    }
}
'Outer While loop'
'Nested While loop'
'Nested While loop'
'Nested While loop'
...

This next example includes a break statement inside the nested While loop. This means we’ll write “Outer While loop” and “Nested While loop” over and over, forever. Well, at least until we stop the execution. In this example, we can get back to the outer While loop.

While ($true) {
    'Outer While loop'

    While ($true) {
        'Nested While loop'
        break
    }
}
'Outer While loop'
'Nested While loop'
'Outer While loop'
'Nested While loop'
'Outer While loop'
'Nested While loop'
...

Our final example will label the outer While loop as :outer, and include that label after the break statement. In this example, we’ll execute the outer While loop, execute the inner While loop, and then break out of both of the While loops. Yes both, from inside the inner loop.

I didn’t even know this was possible before the weekend. My nested menu system is absolutely going to need this! Now, I can allow my users to quit, no matter how deep their level of nestation — yes, I totally just made up that word. Enjoy, and maybe it’s helpful for you one day!

:outer While ($true) {
    'Outer While loop'

    While ($true) {
        'Nested While loop'
        break outer
    }
}
'Outer While loop'
'Nested While loop'
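
Here’s a rough sketch of how that might apply to a menu, with a label on each loop; the menu text and structure below are hypothetical, not the actual menu system:

:main While ($true) {
    $Choice = Read-Host -Prompt 'Main menu: [1] Submenu, [Q] Quit'
    If ($Choice -eq 'Q') { break }

    :sub While ($Choice -eq '1') {
        $SubChoice = Read-Host -Prompt 'Submenu: [B] Back, [Q] Quit'
        If ($SubChoice -eq 'B') { break sub }  # Back to the main menu.
        If ($SubChoice -eq 'Q') { break main } # Quit from the nested menu.
    }
}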

Mike Robbins’ PowerShell 101

If you haven’t heard of Leanpub, that’s about to change. I only know so much about it, but I can already see its benefits. It allows authors to write, publish, and distribute eBooks as they’re being written, and of course at completion. The times are changing, as is technology, and so good authors can’t always be expected to print their books. Trust me, I’m currently reading an AWS book from 2015.

I briefly want to mention a project by Mike F Robbins on Leanpub entitled PowerShell 101. If you’re not following him on Twitter, you ought to be. In January of this year (2017), he introduced his project to the community; he wanted to write an entry-level PowerShell book for anyone that wants to learn PowerShell. The neat thing here is that he wanted to share the things he wishes he’d been told when he was just starting out. The book discusses the help system, objects, using the pipeline, and more topics you’d expect in a book of this type.

So, I sent Mike a message on Twitter. I told him I’d be willing to read the book as he wrote it, and try to help his publishing efforts by offering comments, critiques, and edits. I wasn’t implying he couldn’t do it alone, but I did want to offer my writing and editing skills, and my PowerShell knowledge, toward his project.

Mike agreed, and so over the last several months, I’ve read chapter by chapter as Mike’s been pumping them out. I’ve provided what I can to help his endeavor — not that he really needed me — but so he had another set of eyes. I feel it’s important that there’s been another person reading along, and thinking about his instruction, as a newcomer to PowerShell would.

So with that, share the book as you can and as appropriate. PowerShell is fun. It’s even more fun when you have those basics deep in your pocket, and so I’m glad to have been a part of this project. I truly hope this book can bring clarity to someone’s learning, as they move into being a part of our PowerShell community.

If you didn’t catch the above link, here it is again: PowerShell 101.


PowerShell Saturday Returns to Phoenix

Phoenix PowerShell Saturday 2017

For the second year in a row, I’ve agreed to speak at the Phoenix PowerShell Saturday. Last year was a great opportunity for me in regard to both speaking — my first go at that, when combined with PowerShell — and learning. Thanks so much to Jason Helmick (who gave an amazing talk), and the others involved. Speaking is a small price to pay for a free, full day of PowerShell discussion and learning.

Based on the feedback from Thom Schumacher, both a speaker and event organizer, I get the feeling I did an adequate job. I may never really believe that, but I’ve agreed to head north again this October and talk about my favorite topic with those that choose to attend. Yes, it’s PowerShell.

As of now, I’ve decided to share my newest I-wrote-it-mostly-at-home-kinda work project. What I’ve done is write a function template with built-in logging, and it’s much better than my 1.x versions (link). This could change, I suppose, but for now, I’m proud of what I’ve written, and I’d like an opportunity to talk about it and hand it off to the PowerShell community. The only thing I know thus far is that it’s in Phoenix in October 2017. Watch the #PowerShell hashtag on Twitter for more information.

Phoenix PowerShell Saturday 2016

Here are a few links from the 2016 event. The first link below is from one session, where I discussed the fundamental three (cmdlets): Get-Help, Get-Command, and Get-Member. The second link is from a second session, where I discussed how to get into writing reusable code. In that session, I additionally walked through various language constructs (If, If-Else, Do, and more). The final link below is to my speaker profile for 2016. I needed a place to put these links, and with the upcoming PowerShell Saturday for 2017, this seemed like as good a place as any.

http://powershellsaturday.com/012/presentation/new-to-powershell-session-4/
http://powershellsaturday.com/012/presentation/new-to-powershell-session-6/
http://powershellsaturday.com/012/speaker/tommy-maynard/

A Function for my Functions

I’m in the process of working on a new advanced function template for work. It’ll be the 2.0 version of this: http://tommymaynard.com/function-logging-via-write-verbose-2016. The problem I have with the current version of the template is that I could never log only to a file, without writing to the host program, too. In the end, I want a way to do one of four different things when I run a function written using the template: (1) not log anything, (2) log to the screen only, (3) log to a file only (this is the addition I’m after), or (4) log to both the screen and a file simultaneously. This is nearing completion, but one of the downsides to a well-thought-out and robust advanced function template is that it’s getting really long. I honestly believe I’m at over 100 lines now (the 1.x versions are in 70ish-line territory).

So, I was sitting around thinking of a way to disguise the template, or to add my code to it when it’s done, and I wrote the following pieces of PowerShell. I’m not saying I’ll ever use this, but it seemed worthy enough to share here. I’m out to automatically add my code to the template, so I don’t have to code around the template’s standard code that allows for the logging.

This first example begins the process of creating a new function named New-FunctionFromTemplate. It’s set to accept three parameters: Begin, Process, and End. For those writing advanced functions, the reason should be easily recognizable. It’s going to stuff whatever values are supplied to these parameters into the respective sections of the function template it displays when it’s done executing. You write the code, and the New-FunctionFromTemplate function places that code into the function template.

Function New-FunctionFromTemplate {
    Param(
        [string]$Begin,
        [string]$Process,
        [string]$End
    )
...
}

Next, I’ve included the remainder of this function. It’s a here-string that includes the function template design and layout, with a place for each variable within it. These variables will be replaced with whatever we send into the New-FunctionFromTemplate function when it’s invoked.

Function New-FunctionFromTemplate {
    Param(
        [string]$Begin,
        [string]$Process,
        [string]$End
    )   

    @"
Function ___-________ {
    [CmdletBinding()]
    Param (
    )

    Begin {
        $Begin
    } # End Begin.

    Process {
        $Process
    } # End Process.

    End {
        $End
    } # End End.
} # End Function: ___-________.
"@
}

Now that we’ve defined our function, let’s use it. As a bit of a shortcut, and as a way to make things a bit more readable, we’ll create a parameter hash table and splat it to the New-FunctionFromTemplate function. The below example could’ve been written as New-FunctionFromTemplate -Begin ‘This is in the Begin block.’ -Process ‘This is in the Process block.’ … etc., but I opted not to do that, to make things a bit easier to read and comprehend.

$Params1 = @{
    Begin = 'This is in the Begin block.'
    Process = 'This is in the Process block.'
    End = 'This is in the End block.'
}
New-FunctionFromTemplate @Params1

Below is the output the above commands create.

Function ___-________ {
    [CmdletBinding()]
    Param (
    )

    Begin {
        This is in the Begin block.
    } # End Begin.

    Process {
        This is in the Process block.
    } # End Process.

    End {
        This is in the End block.
    } # End End.
} # End Function: ___-________.

The thing is, someone’s not typically going to add only a single string — a single sentence, if you will — inside a Begin, Process, or End block. They’re much more likely to add various language constructs, logic, and comments. Here’s a second example to help show how we’d go about adding more than just a single string, using a here-string for two of the three blocks.

$Params2 = @{
    Begin = @'
If ($true) {
            'It is true.'
        } Else {
            'It is false.'
        }
'@
    Process = @'
'This is in the Process block.'
'@
    End = @'
If ($false) {
            'It is true.'
        } Else {
            'It is false.'
        }
'@
}
New-FunctionFromTemplate @Params2

When the above code is run, it produces the below output. It includes both our template structure and the code we want inside each of the blocks. In case you didn’t catch it right away, there’s a bit of a caveat: the first line of each here-string is right up against the left margin. Even so, it’ll drop everything into its proper place — a single tab to the right of the beginning of each block. After that first line, you have to be the one to monitor your code placement, so that when it’s combined with the template, all the indentations line up as expected.

Function ___-________ {
    [CmdletBinding()]
    Param (
    )

    Begin {
        If ($true) {
            'It is true.'
        } Else {
            'It is false.'
        }
    } # End Begin.

    Process {
        'This is in the Process block.'
    } # End Process.

    End {
        If ($false) {
            'It is true.'
        } Else {
            'It is false.'
        }
    } # End End.
} # End Function: ___-________.

And, that’s it. While I put this together, I’ve yet to implement it and truly code separately from my current template. We’ll see what the future holds, but at least I know I have this option, if I decide it’s really time to use it. Enjoy your Thursday!

PowerShell Code and AWS CloudFormation UserData

Note: This post was written well over a month ago, but was never posted, due to some issues I was seeing in AWS GovCloud. It works 100% of the time now, in both GovCloud and non-GovCloud AWS. That said, if you’re using Read-S3Object in GovCloud, you’re going to need to include the Region parameter name and value.
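
In practice, that looks something like the below sketch; the bucket name and key prefix are hypothetical, while us-gov-west-1 is the GovCloud (US) region:

# Hypothetical bucket and key prefix; note the added Region parameter for GovCloud.
$Params = @{
    BucketName = 'my-bucket'
    KeyPrefix  = 'WindowsPowerShell/Modules/MyModule/'
    Folder     = "$env:ProgramFiles\WindowsPowerShell\Modules"
    Region     = 'us-gov-west-1'
}
Read-S3Object @Params | Out-Null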

As I spend more and more time with AWS, I end up back at PowerShell. If I haven’t said it yet, thank you Amazon Web Services, for writing us a PowerShell module.

In the last month or two, I’ve been getting into the CloudFormation template business. I love the whole UserData option we have—injecting PowerShell code into an EC2 instance during its initialization—and love that, while we can do it in the AWS Management Console, we can do it with CloudFormation (CFN) too. In the last few months, I’ve decided to do things a bit differently. Instead of dropping large amounts of PowerShell code inside the UserData property of my CFN template, I decided to use Read-S3Object to copy PowerShell modules to EC2 instances, and then just issue calls to their functions in the remainder of the CFN UserData. In one instance, I went from 200+ lines of PowerShell in the CFN template to just a few.

To test, I needed to verify if I could get a module folder and file into the proper place on the instance and be able to use the module’s function(s) immediately, without any need to end one PowerShell session and start a new one. I suspected this would work just fine, but it needed to be seen.

Here’s how the testing went: On my Desktop, I have a folder called MyModule. Inside the folder, I have a file called MyModule.psm1. If you haven’t seen it before, this file extension indicates the file is a PowerShell module file. The contents of the file are as follows:

Function Get-A {
    'A'
}

Function Get-B {
    'B'
}

Function Get-C {
    'C'
}

The file contents indicate that the module contains three functions: Get-A, Get-B, and Get-C. As the next example shows, the Desktop folder isn’t a place where a module folder and file can exist such that the module is automatically loaded into the PowerShell session. PowerShell isn’t aware of this module on its own, as can be seen below.

Get-A
Get-A : The term 'Get-A' is not recognized as the name of a cmdlet, function, script file, or operable program. Check
the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ Get-A
+ ~~~~~
    + CategoryInfo          : ObjectNotFound: (Get-A:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
Get-B
Get-B : The term 'Get-B' is not recognized as the name of a cmdlet, function, script file, or operable program. Check
the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ Get-B
+ ~~~~~
    + CategoryInfo          : ObjectNotFound: (Get-B:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
Get-C
Get-C : The term 'Get-C' is not recognized as the name of a cmdlet, function, script file, or operable program. Check
the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ Get-C
+ ~~~~~
    + CategoryInfo          : ObjectNotFound: (Get-C:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

While I could tell PowerShell to look on my desktop, what I wanted to do is have my CFN template copy the module folder out of S3 and place it on the instances, in a preferred and proper location: C:\Program Files\WindowsPowerShell\Modules. This is a location that PowerShell checks for modules automatically, and loads them the moment a contained function, or cmdlet, from the module, is requested. My example uses a different path, but PowerShell will check here automatically, as well. As a part of this testing, we’re pretending that the movement from my desktop is close enough to the movement from S3 to an EC2 instance. I’ll obviously test this more with AWS.

PS > Move-Item -Path .\Desktop\MyModule\ -Destination C:\Users\tommymaynard\Documents\WindowsPowerShell\Modules\
PS > Get-A
A
PS > Get-B
B
PS > Get-C
C

Without the need to open a new PowerShell session, I absolutely could use the functions in my module the moment the module was moved from the Desktop into a folder PowerShell looks at by default. Speaking of those locations, you can view them by returning the value of the $env:PSModulePath environment variable. Use $env:PSModulePath -split ';' to make it easier to read.
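
On a typical Windows system, that split returns something like the below paths (yours may vary):

PS > $env:PSModulePath -split ';'
C:\Users\tommymaynard\Documents\WindowsPowerShell\Modules
C:\Program Files\WindowsPowerShell\Modules
C:\Windows\system32\WindowsPowerShell\v1.0\Modules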

Well, it looks like I was right. I can simply drop those module folders on the EC2 instance, into C:\Program Files\WindowsPowerShell\Modules, just before they’re used with no need for anything more than the current PowerShell session that’s moving them into place.

Update: After this on-my-own-computer test, I took it to AWS. It works, and now it’s the only way I use my CFN template UserData. I write my function(s), house them in a PowerShell module(s), copy them to S3, and finally use my CFN UserData to copy them to the EC2 instance. When that’s complete, I can call the contained function(s) without any hesitation or additional work. It wasn’t necessary, but I added sleep commands between the function invocations. Here’s a quick, modified example you might find in the UserData of one of my CloudFormation templates.

      UserData:
        Fn::Base64:
          !Sub |
            <powershell>
            # Download PowerShell Modules from S3.
            $Params = @{
              BucketName = 'windows'
              KeyPrefix  = 'WindowsPowerShell/Modules/ProjectVII/'
              Folder     = "$env:ProgramFiles\WindowsPowerShell\Modules"
            }
            Read-S3Object @Params | Out-Null

            # Invoke function(s).
            Set-TimeZone -Verbose -Log
            Start-Sleep -Seconds 15

            Add-EncryptionType -Verbose -Log
            Start-Sleep -Seconds 15

            Install-ProjectVII -Verbose -Log
            Start-Sleep -Seconds 15
            </powershell>

Use PowerShell to Edit a CSV


Notice: There is now a revisited, or second, Edit a CSV post. This time, however, we are obtaining and writing portions of a REST API response to our CSV.


I recently completed a scripting assignment for work. Yeah… it’s a part of what I do. During the process, I learned something new. And like it normally does, this leaves me in a position to share it with the Internet, as well as help solidify it in my own mind. And that’s why I’m writing today.

I thought my recent assignment had a need to edit an existing CSV file on the fly. It turns out it wasn’t necessary for the project, but just maybe it will be for another one, or maybe even one that’s sitting in front of you right now. So, how do you do that? How do you edit an existing CSV file?

Let’s begin with a simple, yet worthy CSV file as an example. In the below CSV data, we have four columns, fields, or properties — whatever you want to call them at this point. They are Name, Status, BatchName, and SentNotification. The idea here — or what I thought it was in my recent assignment, at first — was to notify by email the users that were in a Synced state (Status column) and then modify the SentNotification field so that it said True, instead of False. Take a look at our example data.

Name,Status,BatchName,SentNotification
landrews,Other,MigrationService:FirstBatch-to-O365,FALSE
lpope,Synced,MigrationService:FirstBatch-to-O365,FALSE
ljohnson,Other,MigrationService:FirstBatch-to-O365,FALSE
mlindsay,Other,MigrationService:FirstBatch-to-O365,FALSE
rperkins,Synced,MigrationService:FirstBatch-to-O365,FALSE
dstevenson,Other,MigrationService:FirstBatch-to-O365,FALSE
jbradford,Other,MigrationService:FirstBatch-to-O365,FALSE
jsmith,Other,MigrationService:FirstBatch-to-O365,FALSE
mdavidson,Synced,MigrationService:FirstBatch-to-O365,FALSE
bclark,Synced,MigrationService:FirstBatch-to-O365,FALSE

Let’s first begin by using Import-Csv and Format-Table to view our data.

Import-Csv -Path '.\Desktop\UCSVTestFile.csv' | Format-Table -AutoSize

Name       Status BatchName                           SentNotification
----       ------ ---------                           ----------------
landrews   Other  MigrationService:FirstBatch-to-O365 FALSE           
lpope      Synced MigrationService:FirstBatch-to-O365 FALSE           
ljohnson   Other  MigrationService:FirstBatch-to-O365 FALSE           
mlindsay   Other  MigrationService:FirstBatch-to-O365 FALSE           
rperkins   Synced MigrationService:FirstBatch-to-O365 FALSE           
dstevenson Other  MigrationService:FirstBatch-to-O365 FALSE           
jbradford  Other  MigrationService:FirstBatch-to-O365 FALSE           
jsmith     Other  MigrationService:FirstBatch-to-O365 FALSE           
mdavidson  Synced MigrationService:FirstBatch-to-O365 FALSE           
bclark     Synced MigrationService:FirstBatch-to-O365 FALSE

Now what we need to do, is modify the content, as it’s being imported. Let’s start first, however, by piping our Import-Csv cmdlet to ForEach-Object and returning each object (line).

Import-Csv -Path '.\Desktop\UCSVTestFile.csv' | ForEach-Object {
    $_
}

Name       Status BatchName                           SentNotification
----       ------ ---------                           ----------------
landrews   Other  MigrationService:FirstBatch-to-O365 FALSE           
lpope      Synced MigrationService:FirstBatch-to-O365 FALSE           
ljohnson   Other  MigrationService:FirstBatch-to-O365 FALSE           
mlindsay   Other  MigrationService:FirstBatch-to-O365 FALSE           
rperkins   Synced MigrationService:FirstBatch-to-O365 FALSE           
dstevenson Other  MigrationService:FirstBatch-to-O365 FALSE           
jbradford  Other  MigrationService:FirstBatch-to-O365 FALSE           
jsmith     Other  MigrationService:FirstBatch-to-O365 FALSE           
mdavidson  Synced MigrationService:FirstBatch-to-O365 FALSE           
bclark     Synced MigrationService:FirstBatch-to-O365 FALSE

Hey, look at that. It’s the same thing. The $_ variable represents the current object, or the current row — if it helps to think about it that way — in the pipeline. Let’s add an If statement inside our ForEach-Object loop and get the results we’re after. Remember, if a user has a Synced status, we want to change their SentNotification property to $true, and perhaps notify them, had this been more than just an example.

Import-Csv -Path '.\Desktop\UCSVTestFile.csv' | ForEach-Object {
    If ($_.Status -eq 'Synced' -and $_.SentNotification -eq $false) {
        $_.SentNotification = $true
    }
    $_
} | Format-Table -AutoSize

Name       Status BatchName                           SentNotification
----       ------ ---------                           ----------------
landrews   Other  MigrationService:FirstBatch-to-O365 FALSE           
lpope      Synced MigrationService:FirstBatch-to-O365 True            
ljohnson   Other  MigrationService:FirstBatch-to-O365 FALSE           
mlindsay   Other  MigrationService:FirstBatch-to-O365 FALSE           
rperkins   Synced MigrationService:FirstBatch-to-O365 True            
dstevenson Other  MigrationService:FirstBatch-to-O365 FALSE           
jbradford  Other  MigrationService:FirstBatch-to-O365 FALSE           
jsmith     Other  MigrationService:FirstBatch-to-O365 FALSE           
mdavidson  Synced MigrationService:FirstBatch-to-O365 True            
bclark     Synced MigrationService:FirstBatch-to-O365 True

In the above example, we use an If statement to check the values of two properties. If Status is Synced and SentNotification is $false, we’ll change SentNotification to $true. You can see that this worked. But what now? You see, the file from which we did our import is still the same. In order to update that file, we have a bit more work to do.

I wish I could say to pipe directly back to the file; however, that doesn’t work. The file ends up being blank. It makes sense that it doesn’t work, though, as we’re literally reading each object — each row — and then trying to write back to the file in the same pipeline. Something is bound to go wrong, and it does. So, don’t do what’s in the below example, unless your goal is to fail at this assignment and wipe out your data. If that’s what you’re after, then by all means, have at it.

Import-Csv -Path '.\Desktop\UCSVTestFile.csv' | ForEach-Object {
    If ($_.Status -eq 'Synced' -and $_.SentNotification -eq $false) {
        $_.SentNotification = $true
    }
    $_
} | Export-Csv -Path '.\Desktop\UCSVTestFile.csv' -NoTypeInformation

What we need to do instead is export to a file with a different name, so that when we’re done, both files exist at the same time. Then, we remove the original file and rename the new one with the old one’s name. Here’s the entire example; take a look. And then after that, enjoy the weekend. Oh wait, tomorrow is only Friday. I keep thinking it’s the weekend, because I’m home tomorrow to deal with 1,000 square feet of sod. If only PowerShell could lay the sod for me.

Import-Csv -Path '.\Desktop\UCSVTestFile.csv' | ForEach-Object {
    If ($_.Status -eq 'Synced' -and $_.SentNotification -eq $false) {
        $_.SentNotification = $true
    }
    $_
} | Export-Csv -Path '.\Desktop\UCSVTestFile-temp.csv' -NoTypeInformation
Remove-Item -Path '.\Desktop\UCSVTestFile.csv'
Rename-Item -Path '.\Desktop\UCSVTestFile-temp.csv' -NewName 'UCSVTestFile.csv'
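
As an aside, another way around the clobbering problem (a sketch, not what I used here) is to let the import finish into a variable first; once the pipeline has completed, writing back to the original path is safe:

# Read and modify everything in memory before any writing begins.
$Data = Import-Csv -Path '.\Desktop\UCSVTestFile.csv' | ForEach-Object {
    If ($_.Status -eq 'Synced' -and $_.SentNotification -eq $false) {
        $_.SentNotification = $true
    }
    $_
}

# The file is no longer being read, so it's safe to overwrite it in place.
$Data | Export-Csv -Path '.\Desktop\UCSVTestFile.csv' -NoTypeInformation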

Push-Location’s Two for One

Sometimes, my life only has time for these short, little lessons.

Today, I learned something new, and without even thinking on it long, I went straight to my blog in order to share it. More or less, anyway. Before we get to the interesting part, let’s quickly discuss three PowerShell cmdlets: Set-Location, Push-Location, and Pop-Location.

Set-Location allows us to relocate ourselves within the file system. This is to say that we can use this cmdlet to move around from folder to folder, and drive to drive. If I’m at the root of the C:\ drive, I can move to C:\Users, and if I’m in C:\Users and want to move to C:\Windows, I can also use Set-Location, or one of its aliases (cd, chdir, and sl), to get myself there. Here’s a quick example, before we move on.

PS C:\> Set-Location -Path C:\Users
PS C:\Users> Set-Location -Path C:\Windows
PS C:\Windows> Set-Location -Path \
PS C:\>

Push-Location’s purpose is to take our current location in the file system and add it to the location stack. We won’t delve into this too deeply, but picture it this way: It takes our current location in the file system — C:\, or C:\Users, or wherever — and puts it on a piece of paper, on top of a stack of other papers. Now we can reference our paper on top of the stack, to find our previous location the next time we need it.

And that brings us to Pop-Location. Pop-Location gets the most recent entry on the location stack — that top piece of paper, if you will — and moves us to that location in the file system. Here’s an example of both Push-Location and Pop-Location.

PS C:\> # Our current location is the C:\ drive.
PS C:\> Push-Location
PS C:\> Set-Location -Path C:\Users
PS C:\Users> Pop-Location
PS C:\>

That introduction brings us to something I would’ve thought I already knew. Today I learned that Push-Location offers us a two-for-one. Not only will it place the current location on the location stack, as we’d expect, but it can also move us to a different location, just as Set-Location does. Watch.

PS C:\> # Back on the C:\ drive.
PS C:\> Push-Location -Path C:\Windows
PS C:\Windows> Pop-Location
PS C:\> Push-Location -Path C:\Users\tommymaynard
PS C:\Users\tommymaynard> Pop-Location
PS C:\>

With this tidbit of new information, I set out to replace the Set-Location cmdlet in my $PROFILE. Now when I use Set-Location — my new function — I’ll really be using Push-Location. Therefore, I can always return to the previous location in the file system with Pop-Location. Always.

Function Set-Location {
    Param (
       [string]$Path
    )
    Push-Location -Path $Path
}

PS C:\> # As you can see, I'm at the root of the C:\ drive.
PS C:\> Set-Location -Path C:\Windows
PS C:\Windows> Pop-Location
PS C:\>
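
One more tidbit that pairs nicely with these cmdlets: if you ever want to see the stack itself (that pile of papers), Get-Location includes a -Stack parameter:

PS C:\> Push-Location -Path C:\Windows
PS C:\Windows> Push-Location -Path C:\Users
PS C:\Users> Get-Location -Stack

Path
----
C:\Windows
C:\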

Silent Install from an ISO

In the last several weeks, I’ve been having a great time writing PowerShell functions and modules for new projects moving to Amazon Web Services (AWS). I’m thrilled with the inclusion of UserData as a part of provisioning an EC2 instance. Having developed my PowerShell skills, I’ve been able to leverage them in conjunction with UserData to do all sorts of things to my instances. I’m reaching into S3 for installers, expanding archive files, creating folders, and bringing down custom-written modules in UserData and invoking their contained functions there, too. I’m even setting the timezone. It seems so straightforward, sure, but getting automation and logging wrapped around that need is rewarding.

As a part of an automated SQL installation — yes, the vendor told me they don’t support AWS RDS — I had a new challenge. It wasn’t overly involved by any means, but it’s worthy of sharing, especially if someone hits this post in a time of need and gets a problem solved. I’ve said it a million times: I often write so I have a place to put things I may forget, but truly, it’s about anyone else I can help, as well. I’ve been at this almost three years now.

Back to Microsoft SQL: it’s on an ISO. I’ve been pulling down Zip files for weeks, in various projects, with CloudFormation, and expanding them, but this was a new one. I needed to get at the files in that ISO to silently run an installation. Enter the Mount-DiskImage function from Microsoft’s Storage module. Its help synopsis says this: “Mounts a previously created disk image (virtual hard disk or ISO), making it appear as a normal disk.” The command to pull just that help information is listed below.

PS > (Get-Help -Name Mount-DiskImage).Synopsis

As I typically do, I started working with the function in order to learn how to use it. It works as described. Here’s the command I used to mount my ISO.

PS > Mount-DiskImage -ImagePath 'C:\Users\tommymaynard\Desktop\SQL2014.ISO'

The above example doesn’t produce any output by default, and I rather like it that way. After a dismount — it’s the same above command with Dismount-DiskImage instead of Mount-DiskImage — I tried it with the -PassThru parameter. This parameter returns an object with some relevant information.

PS > Mount-DiskImage -ImagePath 'C:\Users\tommymaynard\Desktop\SQL2014.ISO' -PassThru

Attached          : False
BlockSize         : 0
DevicePath        :
FileSize          : 2606895104
ImagePath         : C:\Users\tommymaynard\Desktop\SQL2014.ISO
LogicalSectorSize : 2048
Number            :
Size              : 2606895104
StorageType       : 1
PSComputerName    :

The first thing I noticed about this output is that it didn’t provide the drive letter used to mount the ISO. I was going to need that drive letter in PowerShell, in order to move to that location and run the installer. Even if I didn’t move to that location, I needed the drive letter to create a full path. The drive letter was vital, and that’s why we’re here today.

Update: See the below post replies where Get-Volume is used to discover the drive letter.
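
That approach is worth sketching here, too; the E below is simply whatever letter your system happens to assign:

PS > $Image = Mount-DiskImage -ImagePath 'C:\Users\tommymaynard\Desktop\SQL2014.ISO' -PassThru
PS > ($Image | Get-Volume).DriveLetter
E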

Although the warmup here seemed to take a bit, we’re almost done for today. I’ll drop the code below, and we’ll do a quick, line-by-line walkthrough.

# Mount SQL ISO and run setup.exe.
PS > $DrivesBeforeMount = (Get-PSDrive).Name
PS >
PS > Mount-DiskImage -ImagePath 'C:\Users\tommymaynard\Desktop\SQL2014.ISO'
PS >
PS > $DrivesAfterMount = (Get-PSDrive).Name
PS >
PS > $DriveLetterUsed = (Compare-Object -ReferenceObject $DrivesBeforeMount -DifferenceObject $DrivesAfterMount).InputObject
PS >
PS > Set-Location -Path "$DriveLetterUsed`:\"

Line 2: The first command in this series stores the Name property of all the drives in our current PowerShell session in a variable named $DrivesBeforeMount. That name should offer some clues.

Line 4: This line should look familiar; it mounts our SQL 2014 ISO (to a mystery drive letter).

Line 6: Here, we run the same command as in Line 2, however, we store the results in $DrivesAfterMount. Do you see what we’re up to yet?

Line 8: This command compares our two recently created Drive* variables. We want to know which drive is there now that wasn’t there when the first Get-PSDrive command was run.

Line 10: And finally, now that we know the drive letter used for our newly mounted ISO, we can move there in order to access the setup.exe file.
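
From there, the last step is kicking off the installer itself. Here’s a hedged sketch of what that might look like; the setup.exe arguments below are placeholders, not the vendor’s actual silent-install switches:

# Hypothetical silent install; swap in your installer's real arguments.
$Setup = "$DriveLetterUsed`:\setup.exe"
Start-Process -FilePath $Setup -ArgumentList '/Q','/ACTION=Install' -Wait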

Okay, that’s it for tonight. Now back to working on a silent SQL install on my EC2 instance.