Category Archives: Quick Learn

Practical examples of PowerShell concepts gathered from specific projects, forum replies, or general PowerShell use.

Require Use of a Cmdlet’s Prefix

I learned something new today. That’s always fun. Okay, not always, but when it’s about PowerShell, it often is. When you import a module — we’ll use the Active Directory module as an example — and use a prefix, you can still use the original cmdlet names. We’ll begin by importing the Active Directory PowerShell module to include a prefix.

PS > Import-Module -Name ActiveDirectory -Prefix Lol

With that module now imported, we'll execute a Get-Command command. As the Name parameter allows for wildcards, we'll instruct it to return all the commands PowerShell knows about, at this stage in the session, that end with the ADUser suffix from the ActiveDirectory module.

PS > Get-Command -Name '*ADUser' -Module ActiveDirectory

CommandType     Name                      Version    Source
-----------     ----                      -------    ------
Cmdlet          Get-LolADUser             1.0.0.0    ActiveDirectory
Cmdlet          New-LolADUser             1.0.0.0    ActiveDirectory
Cmdlet          Remove-LolADUser          1.0.0.0    ActiveDirectory
Cmdlet          Set-LolADUser             1.0.0.0    ActiveDirectory

The above results indicate that PowerShell knows about four commands that end with the ADUser suffix. Each of those commands includes the Lol prefix we used in the first command we issued, Import-Module. So far, everything is going as expected. Or is it?

Check this out. When we opt not to use the Module parameter, and use Where-Object and the Source property instead, we get different results. This is just getting weird, but at minimum, there's a bit of proof here that the original cmdlets still exist, as well as the prefixed versions.

PS > Get-Command -Name '*ADUser' | Where-Object -Property Source -eq ActiveDirectory

CommandType     Name                      Version    Source
-----------     ----                      -------    ------
Cmdlet          Get-ADUser                1.0.0.0    ActiveDirectory
Cmdlet          Get-LolADUser             1.0.0.0    ActiveDirectory
Cmdlet          New-ADUser                1.0.0.0    ActiveDirectory
Cmdlet          New-LolADUser             1.0.0.0    ActiveDirectory
Cmdlet          Remove-ADUser             1.0.0.0    ActiveDirectory
Cmdlet          Remove-LolADUser          1.0.0.0    ActiveDirectory
Cmdlet          Set-ADUser                1.0.0.0    ActiveDirectory
Cmdlet          Set-LolADUser             1.0.0.0    ActiveDirectory

If we reissue the Get-Command command and indicate we only want the Get-ADUser command specifically, it shows up, too. Still, it’s odd that it wasn’t included in the first results. I think most of us would have expected to see it there, too. We’ll get back to this in a moment.

PS > Get-Command -Name Get-ADUser -Module ActiveDirectory

CommandType     Name                      Version    Source
-----------     ----                      -------    ------
Cmdlet          Get-ADUser                1.0.0.0    ActiveDirectory

About this time in my poking around is when I started examining the imported module. The first thing I noted, due to the default output, was that the first two commands I could see under ExportedCommands didn't include the noun prefix. It's more proof the original cmdlets are still around.

PS > Get-Module -Name ActiveDirectory

ModuleType Version Name            ExportedCommands
---------- ------- ----            ----------------
Manifest   1.0.0.0 ActiveDirectory {Add-ADCentralAccessPolicyMember, Add-ADComputerServiceAcc...

The next command I ran was to better view these exported commands. I haven’t included all the results, but I’m sure you’ll get the idea.

PS > (Get-Module -Name ActiveDirectory).ExportedCommands

Key                                               Value
---                                               -----
Add-ADCentralAccessPolicyMember                   Add-LolADCentralAccessPolicyMember
Add-ADComputerServiceAccount                      Add-LolADComputerServiceAccount
Add-ADDomainControllerPasswordReplicationPolicy   Add-LolADDomainControllerPasswordReplicationPolicy
Add-ADFineGrainedPasswordPolicySubject            Add-LolADFineGrainedPasswordPolicySubject
Add-ADGroupMember                                 Add-LolADGroupMember
Add-ADPrincipalGroupMembership                    Add-LolADPrincipalGroupMembership
Add-ADResourcePropertyListMember                  Add-LolADResourcePropertyListMember
Clear-ADAccountExpiration                         Clear-LolADAccountExpiration
Clear-ADClaimTransformLink                        Clear-LolADClaimTransformLink
Disable-ADAccount                                 Disable-LolADAccount
...

You'll notice that we're dealing with a hash table. For every key named with the original cmdlet name, we have a value that is the cmdlet name with the noun prefix. The hash table appears to be used to track the relationship between the original cmdlets and their matching prefixed versions. For fun, I opened a new PowerShell session and imported the Active Directory module without the prefix. In that case, the above hash table had the same command for a Key and its matching Value.
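Based on the output above, a single lookup against that hash table should hand back the prefixed counterpart of any original name. A small sketch of that idea, assuming the Lol prefix from earlier:

PS > (Get-Module -Name ActiveDirectory).ExportedCommands['Get-ADUser'].Name
Get-LolADUser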

This finding made me think of something. Perhaps the Get-Command command that included the Module parameter only checked for commands against the values in this hash table, while the Where-Object cmdlet returned both the keys and the values from the hash table. Just a thought; I said I'd get back to this a moment ago.

So here's why I really decided to write. What if I don't want to allow a user to use the original command? I don't want them to use Get-ADUser to invoke Get-ADUser; instead, I want them to use the prefixed version only. Why, I don't exactly know, but here's how I fixed that one.

We'll pretend that we've yet to import the Active Directory module; we're starting fresh with the below sequence of commands. We'll begin by creating two variables: $Module and $Prefix. In the subsequent command, we'll import the Active Directory module, provided it hasn't already been imported. When it's imported, we'll include the Prefix parameter with the Lol value. This means Get-ADUser becomes Get-LolADUser, as can be figured out from the previous examples.

PS > $Module = 'ActiveDirectory'; $Prefix = 'Lol'

PS > If (-Not(Get-Module -Name $Module)) {
>>> Import-Module -Name $Module -Prefix $Prefix
>>> }

PS > $ExportedCommands = (Get-Module -Name $Module).ExportedCommands

PS > Foreach ($Command in $ExportedCommands.GetEnumerator()) {
>>> New-Item -Path "Function:\$($Command.Key)" -Value "Write-Warning -Message 'Please use the $($Command.Key.Replace('-',"-$Prefix")) function.'" -Force
>>> }

After that, we'll get all of the ExportedCommands into a variable with the same name. Then, the Foreach command iterates over the keys in our hash table (the original cmdlet names) and creates a function for each one of them. One minute, Get-ADUser executes as expected, and the next minute, it doesn't. Now, every time someone enters the actual AD cmdlet name, a function with the same name will execute instead. It will indicate to the user to use the prefixed version. Here are a few examples.

PS > Get-ADUser -Identity tommymaynard
WARNING: Please use the Get-LolADUser function.

PS> Unlock-ADAccount tommymaynard
WARNING: Please use the Unlock-LolADAccount function.

PS > Get-ADDomain
WARNING: Please use the Get-LolADDomain function.

PS > (Get-LolADUser tommymaynard).SamAccountName
tommymaynard

No idea if this might ever prove useful, but at minimum, I now know I can change the default behavior of a cmdlet in order to point someone toward the prefixed version. Oh, and the reason the function I created runs instead of the cmdlet is command precedence. Read about it; it's beneficial to know. I knew it existed and I've used it, I've just never used it in bulk to create functions to run in place of so many cmdlets. Additionally, however, I'm not sure if I've ever created a function by using the New-Item cmdlet directly against the Function PSDrive. Neat.
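If you want to see that precedence for yourself, Get-Command includes an All switch that returns every matching command, even the ones being shadowed. A quick check:

PS > Get-Command -Name Get-ADUser -All | Select-Object -Property CommandType, Name, Source

Both the new function and the original cmdlet should come back, and PowerShell runs the function because functions outrank cmdlets in the precedence order (alias, function, cmdlet, application).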

Enjoy the weekend.

Update: Ah, it's Monday again. This seems to happen at least once a week. There's something I wanted to note, and it's not just the day. Even though we're using a function to indicate to users to use the prefixed cmdlet, the original cmdlet can still be used. If someone uses the module-qualified name of the cmdlet, the function will be ignored. That's to say, this still works.

PS > ActiveDirectory\Get-ADUser

Linux Prompt on Windows – Part VI

My most recent post in the Linux Prompt on Windows series is Part V. Now, we're on VI, and it's all because of PowerShell 6.0.

As 6.0 uses a different $PROFILE script, it was mandatory that I create a new one and quickly copy over my Linux prompt. I hate to be anywhere without it. You can create a $PROFILE script in PowerShell 6.0 the same way we did previously: New-Item -Path $PROFILE -Force to create the file, followed by notepad $PROFILE to open it in Notepad.
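In command form, that's:

PS > New-Item -Path $PROFILE -Force
PS > notepad $PROFILE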

The newest change is adding the version number just to the right of the host program's name in the WindowTitle. As it should, the WindowTitle indicates it's PowerShell 6.0.

Type in powershell.exe and press Enter, and the WindowTitle changes. Now we’re in PowerShell 5.1.

Go deeper in time and type powershell.exe -version 2, press Enter, and it changes again, but this time to 2.0. By adding the version to the WindowTitle via my prompt function, I can move between versions of PowerShell, if needed, and always know the version in which I'm working.

I've included the fully updated prompt function below. Copy it into your $PROFILE script, restart PowerShell, and enjoy. I can't be the only one to appreciate this prompt, especially as PowerShell 6.0 just went GA. Like the beta versions before it, it's cross-platform; therefore, my lookalike Linux prompt makes even more sense now.

Update: I recently cleaned out my new 6.0 $PROFILE script and did what I’ve always done instead. That is to dot source a different profile script (the one loaded by Windows PowerShell 5.1). So, in place of my prompt function, I now have this entry: . C:\Users\tommymaynard\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1. Yeah, you’re reading that right, “WindowsPowerShell.” All my stuff is in there! It should be fun to see what works and what doesn’t!

Update: I got really tired of looking up at the WindowTitle to see my version, so I added it to the prompt, as well. It's just after the closing square bracket and before the # or $ symbol. By the way, the difference between the two is that # indicates an administrative user, and $ indicates a non-administrative user. Oh yes, and for today, my host is blue. It's not the exact blue, but close. The font color is also gray (by default), another slight difference between the host configuration for 5.1 vs. 6.0. Anyway, I've updated the below code to indicate the version as a part of the prompt itself.

Function Prompt {
    # Point the FileSystem provider's home at the user profile, so ~ resolves there.
    (Get-PSProvider -PSProvider FileSystem).Home = $env:USERPROFILE

    # Determine if Admin and set Symbol variable.
    If ([bool](([System.Security.Principal.WindowsIdentity]::GetCurrent()).Groups -match 'S-1-5-32-544')) {
        $Symbol = '#'
    } Else {
        $Symbol = '$'
    }

    # Write Path to Location Variable as /.../...
    If ($PWD.Path -eq $env:USERPROFILE) {
        $Location = '/~'
    } ElseIf ($PWD.Path -like "*$env:USERPROFILE*") {
        $Location = "/$($PWD.Path -replace ($env:USERPROFILE -replace '\\','\\'),'~' -replace '\\','/')"
    } Else {
        $Location = "$(($PWD.Path -replace '\\','/' -split ':')[-1])"
    }

    # Determine Host for WindowTitle.
    Switch ($Host.Name) {
        'ConsoleHost' {$HostName = 'consolehost'; break}
        'Windows PowerShell ISE Host' {$HostName = 'ise'; break}
        'Visual Studio Code Host' {$HostName = 'vscode'; break}
        default {}
    }

    # Determine PowerShell version.
    $PSVer = "$($PSVersionTable.PSVersion.Major).$($PSVersionTable.PSVersion.Minor)"

    # Create and write Prompt; Write WindowTitle.
    $UserComputer = "$($env:USERNAME.ToLower())@$($env:COMPUTERNAME.ToLower())"
    $Location = "$((Get-Location).Drive.Name.ToLower())$Location"

    # Check if in the debugger.
    If (Test-Path -Path Variable:/PSDebugContext) {
        $DebugStart = '[DBG]: '
        $DebugEnd = ']'
    }

    # Actual prompt and title.
    $Host.UI.RawUI.WindowTitle = "$HostName $PSVer`: $DebugStart[$UserComputer $Location]$DebugEnd$Symbol"
    "$DebugStart[$UserComputer $Location]$DebugEnd$PSVer$Symbol "
}


Linux Prompt on Windows – Part V

http://tommymaynard.com/an-addition-to-the-linux-powershell-prompt-iv-2016/

http://tommymaynard.com/an-addition-to-the-linux-powershell-prompt-ii-2016/

http://tommymaynard.com/an-addition-to-the-linux-powershell-promp-2016/

http://tommymaynard.com/duplicate-the-linux-prompt-2016/

An Advanced Function Template (Version 2.1 -and -gt)

Version 2.1

Back in October 2017, I spoke at the Arizona PowerShell Saturday in Phoenix, Arizona. My topic was introducing those that attended to a Windows PowerShell Advanced Function Template. I found a flaw in it, which I've since fixed in my 2.1 version linked below. For anyone after this template, this is the most current version. Newer versions, providing there are any, will be included in this post and not in another, separate post around here. Therefore, 2.1 may not forever be the current version, but the current version will always be a part of this post.

The flaw that took us to 2.1 was that all parameters and parameter values were logged. That's a good thing, but it means that if someone supplied a password as a string, it would be logged (providing the Log parameter was used). Therefore, 2.1 looks for parameters with the word "password" in them, and then doesn't include the parameter value. Instead, the log reads like the below example. This isn't foolproof, but I think it's a helpful addition.

PS > Get-CPUCount -ComputerName server01 -Password 'test1' -MainPassword 'test2' -PasswordName 'test3' -xPasswordx 'test4' -Log ToScreen
...
[1/10/2018 10:03:27 PM]: [INFO   ] Including the "ComputerName" parameter with the "server01" value.
[1/10/2018 10:03:27 PM]: [INFO   ] Including the "Password" parameter without the parameter value.
[1/10/2018 10:03:27 PM]: [INFO   ] Including the "MainPassword" parameter without the parameter value.
[1/10/2018 10:03:27 PM]: [INFO   ] Including the "PasswordName" parameter without the parameter value.
[1/10/2018 10:03:27 PM]: [INFO   ] Including the "xPasswordx" parameter without the parameter value.
[1/10/2018 10:03:27 PM]: [INFO   ] Including the "Log" parameter with the "ToScreen" value.
...
AdvancedFunctionTemplate2.1


Older Posts:

An Advanced Function Template (Version 2.1 -and -gt)

An Alias and Function Change

Currently, my site has 230 published posts. It'll be 231 when this one goes live. Those were all written by me since mid-2014. I'm not sure of my per-month average, but I can say for sure that I've never missed a month.

Outside of those 230 published posts are 40-some drafts. Those are posts I started and never finished for one reason or another. Sometimes, I realized what I thought worked really didn't, and therefore I wasn't able to complete the post. Well, in an effort to clean up the drafts, I'm publishing my first stupid draft turned post. Here goes… It's learning from my failure; enjoy.

For just about as long as I can remember, I've always created an alias for my functions just before the function is defined. This is of course when a function is defined outside of a module, such as when it's defined in my $PROFILE script. In the below example, I do just that. After the alias is created, I define the function that the alias invokes.

Set-Alias -Name saw -Value Show-AWord
Function Show-AWord {
    '!! A Word !!'
}

PS > saw
!! A Word !!

There’s another way to do this, as well. You can create the alias after the function’s been defined. You just swap the commands.

Function Show-EWord {
    '** E Word **'
}
Set-Alias -Name sew -Value Show-EWord

PS > sew
** E Word **

And here’s where the post went stupid.

I’ve always been mildly annoyed that I needed to have the code outside of the function, whether it’s before, or after, the function definition. I always wished there was a way to create aliases from inside the function.

Well, there is, and there always has been. I've just never given it much thought until about five minutes ago. This might be why I started this post; I didn't think about it long enough. Here's nearly the same function as above; however, now we'll create the alias for the function within the function. Because the Set-Alias cmdlet has a Scope parameter, we can create a global alias from inside the function.

Function Show-GWord {
    Set-Alias -Name sgw -Value Show-GWord -Scope Global
    '$$ G Word $$'
}

PS > sgw
PS > # Nothing

Here's about the time I realized my problem. If you create an alias inside the function (using the Global scope [that's how it "works"]), the alias is not going to exist until after the function has been invoked for the first time. Therefore, the function would have to be run like the below example. I pretty much removed a line outside the function, put it into the function, and then added another line outside the function. Ugh, not what I was after at all.

PS > Show-GWord | Out-Null
PS > sgw
$$ G Word $$

So yeah, this post didn't go as planned. No wonder it made its home in my drafts. It makes you wonder, though. Why isn't there a way to run some code inside a function when the function is being defined? Maybe because functions belong in modules, and modules give you this ability when they're imported, via their module manifest and, potentially, their ScriptsToProcess.
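For what it's worth, a module does make the alias-at-definition idea painless. Here's a minimal sketch, assuming a hypothetical module file named ShowWord.psm1: define the function and its alias in the module, export both, and they exist together the moment the module is imported.

# ShowWord.psm1 (hypothetical)
Function Show-GWord {
    '$$ G Word $$'
}
Set-Alias -Name sgw -Value Show-GWord
Export-ModuleMember -Function Show-GWord -Alias sgw

After Import-Module ShowWord, both Show-GWord and sgw are available; no first-invocation requirement.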

There you have it. A stupid draft I published.

AWS Write-S3Object Folder Creation Problem

If you were wondering what I occasionally thought about over the holiday break while I was scrubbing my pool tiles (ugh), it was AWS and the Get-S3Object and Write-S3Object cmdlets. Why wasn't I able to get them to do what I wanted!? Let me first explain my problem, and then my solution.

When you create a folder (and yes, I realize it's not really a folder) in the AWS Management Console, you do so by clicking the "+ Create folder" button.

Simple stuff. In my case, you choose the AES-256 encryption setting and then give the folder a name. This folder, because I created it in this manner, is returned by the Get-S3Object cmdlet. So, let's say we walked through this manual procedure and created a folder called S3ConsoleFolder. Here's what Get-S3Object would return, providing the S3 bucket, which we're calling tst-bucket, had been empty from the start.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/

Because it could prove helpful, let's say we also manually created a nested folder inside of our S3ConsoleFolder called S3CFA, as in the A folder of the S3ConsoleFolder. Consider it done. Here are the new results of the Get-S3Object command. As you'll see now, we get back both the top-level folder and the newly created nested folder.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/

Now, let’s get one step closer to using Write-S3Object. Its purpose, according to its maker, Amazon, is that it “uploads one or more files from the local file system to an S3 bucket.” That, it does. Below I’ve indicated the top-level folder, the nested folders, and the files that we’ll upload. The complete path to this folder is C:\Users\tommymaynard\Desktop\TestFolder.

TestFolder
|__ FolderA
|	|__ TestFile2.txt
|	|__ TestFile3.txt
|__ FolderB
|	|__ TestFile4.txt
|	|__ TestFile5.txt
|	|__ TestFile6.txt
|	|__ FolderC
|		|__ TestFile7.txt
|__ TestFile1.txt

The below Write-S3Object command’s purpose is to get everything in the above file structure uploaded to AWS S3. Additionally, it creates the folders we need: TestFolder, FolderA, FolderB, and FolderC. I’m using a parameter hash table below to decrease the length of my command, so it’s easier to read. It’s in no way a requirement.

$Params = @{
    BucketName = 'tst-bucket'
    Folder = 'C:\Users\tommymaynard\Desktop\TestFolder'
    KeyPrefix = (Split-Path -Path 'C:\Users\tommymaynard\Desktop\TestFolder' -Leaf).TrimEnd('\')
    Recurse = $true
    Region = 'us-gov-west-1'
    ServerSideEncryption = 'AES256'
}

With the parameter hash table created, we’ll splat it on the Write-S3Object cmdlet. When completed, we’ll run the Get version of the cmdlet again, to see what was done.

Write-S3Object @Params
(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/
TestFolder/FolderA/TestFile2.txt
TestFolder/FolderA/TestFile3.txt
TestFolder/FolderB/FolderC/TestFile7.txt
TestFolder/FolderB/TestFile4.txt
TestFolder/FolderB/TestFile5.txt
TestFolder/FolderB/TestFile6.txt
TestFolder/TestFile1.txt

Now, look at the above results and tell me what’s wrong. I’ll wait…

… Can you see it? What don’t we have included in those results, that we did when we created our folders in the AWS Management Console?

Maybe I was asking for too much, but I expected to have my folders returned on their own lines just like we do for S3ConsoleFolder/ and S3ConsoleFolder/S3CFA/. Remember, I’m lying down on my pool deck, scrubbing the pool tiles (it’s Winter, yes, but I live in southern Arizona), and I cannot for the life of me wrap my head around why I’m not seeing those folders on. their. own. lines. I expected to see these lines within my results:

TestFolder/
TestFolder/FolderA/
TestFolder/FolderB/
TestFolder/FolderB/FolderC/

Remember, I can create folders in the AWS Management Console and it works perfectly, but not with Write-S3Object. Well, not with Write-S3Object the way I was using it. I finally had an idea worth trying: I needed to use Write-S3Object to create the folders first, and then upload the files into the folders. It’s obnoxious. While that required more calls to Write-S3Object, I was okay with it, if it could get me the results I wanted. And ultimately, get my users the results I wanted them to have.

So let’s dump my TestFolder from S3 and start over. We’re here again.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/

We'll start by creating our parameter hash table. After that, we'll begin to use the $Params variable and its properties (its keys) to supply parameter values to parameter names. I don't actually splat the entire hash table in this code section. Although the next three code sections go together, I've broken them up so I can better explain them. This first section creates a $Path variable, an aforementioned parameter hash table (partially based on the $Path variable), and an If statement. The If statement works this way: if my S3 bucket doesn't already include a folder called TestFolder/, then create it.

$Path = 'C:\Users\tommymaynard\Desktop\TestFolder'
$Params = @{
    BucketName = 'tst-bucket'
    Folder = (Split-Path -Path $Path -Leaf).TrimEnd('\')
    KeyPrefix = (Split-Path -Path $Path -Leaf).TrimEnd('\')
    Recurse = $true
    Region = 'us-gov-west-1'
    ServerSideEncryption = 'AES256'
}

# Create top-level folder (if necessary).
If ((Get-S3Object -BucketName $Params.BucketName -Region $Params.Region).Key -notcontains "$($Params.Folder)/") {
    Write-S3Object -BucketName $Params.BucketName -Region $Params.Region -Key "$($Params.Folder)/" -Content $Params.Folder -ServerSideEncryption AES256
}

Now, Get-S3Object returns my newly created, top-level folder. It was working so far.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/
TestFolder/

This second section gets all of the directory names from my path. If there are duplicates, and there were, they're removed by Select-Object's Unique parameter. Once I know these, I can start creating my nested folders after cleaning them up a little: splitting the path, replacing backslashes with forward slashes, and removing any forward slashes from the beginning of the path. With each of those, we'll make sure the cleaned-up path doesn't include two forward slashes (this would indicate it's the top-level folder again [as TestFolder//]), and that it doesn't already exist.

# Create nested level folder(s) (if necessary).
$NestedPaths = (Get-ChildItem -Path $Path -Recurse).DirectoryName | Select-Object -Unique
Foreach ($NestedPath in $NestedPaths) {

    $CleanNestedPath = "$(($NestedPath -split "$(Split-Path -Path $Path -Leaf)")[-1].Replace('\','/').TrimStart('/'))"

    If (("$($Params.Folder)/$CleanNestedPath/" -notmatch '//') -and ((Get-S3Object -BucketName $Params.BucketName -Region $Params.Region).Key -notcontains "$($Params.Folder)/$CleanNestedPath/")) {
        Write-S3Object -BucketName $Params.BucketName -Region $Params.Region -Key "$($Params.Folder)/$CleanNestedPath/" -Content $CleanNestedPath -ServerSideEncryption AES256
    }
}

And, now Get-S3Object returns my nested folders, too. Still working.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/
TestFolder/
TestFolder/FolderA/
TestFolder/FolderB/
TestFolder/FolderB/FolderC/

This last section only serves to upload the files from the EC2 instance to the S3 bucket and into the folders we've created. Unlike the last two sections, which checked for existing folders, this doesn't check whether the files already exist. It'll happily write, right over them without warning. I didn't need this protection, so I didn't include it.

Write-S3Object -BucketName $Params.BucketName -Region $Params.Region -Folder $Path -KeyPrefix "$($Params.Folder)/" -ServerSideEncryption AES256 -Recurse

Now that my folders are created and the files are uploaded, I get the results I expect. I can see all the folders on their own lines, as well as that of the files.

(Get-S3Object -BucketName 'tst-bucket' -Region 'us-gov-west-1').Key
S3ConsoleFolder/
S3ConsoleFolder/S3CFA/
TestFolder/
TestFolder/FolderA/
TestFolder/FolderA/TestFile2.txt
TestFolder/FolderA/TestFile3.txt
TestFolder/FolderB/
TestFolder/FolderB/FolderC/
TestFolder/FolderB/FolderC/TestFile7.txt
TestFolder/FolderB/TestFile4.txt
TestFolder/FolderB/TestFile5.txt
TestFolder/FolderB/TestFile6.txt
TestFolder/TestFile1.txt

I do hope I didn’t overlook an easier way to do this, but as history has proved, it’s quite possible. Here’s to hoping this can help someone else. I felt pretty lost and confused until I figured it out. It seems to me that AWS needs to iron this one out for us. No matter how it’s used, Write-S3Object should create “folders” in such a way that they are consistently returned by Get-S3Object. That’s whether they’re created before files are uploaded (my fix), or simply created as files are uploaded.

And, cue the person to tell me the easier way.

Comfortably Save a Pester Test

I've spent some time with Pester this week. While exploring the OutputFile parameter, it quickly became clear that the best I seemed to be able to output was some form of XML, and honestly, that wasn't good enough for me in my moment of discovery. While I intend to make myself a Pester expert in the coming year (2018), there are some things I don't know 100% yet, and so I understand it's possible that I sound like an idiot, as I may not know about x in regard to Pester.

While working with Invoke-Pester, I came across the -PassThru parameter. Its purpose in life, in relation to the Invoke-Pester command, is to create a PSCustomObject. Now that's something I can work with. The idea here is to invoke Pester against an AWS instance, export my object (to an XML standard I can deal with [thank you, Export-Clixml]), and write it to S3. Then, I can download the file to a computer that's not my newly configured AWS instance and check which tests passed and failed. This, without the need to RDP (Remote Desktop) to the instance and visually and manually check to see what did and didn't work from there. We're getting closer and closer to RDP being a security incident.

My example is not going to include an actual Pester run, so instead we’ll jump directly to the Invoke-Pester command and what we’ll do after that command has executed. This first example is a fairly standard way of invoking Pester (with parameters) and creating an output file. Again, this isn’t the output I’m after.

PS > Invoke-Pester -Script @{Path = 'C:\WkDir\HostAcceptance.Tests.ps1'; Parameters = @{Project = 'trailking'; Environment = 'tst'}} -OutputFile 'C:\WkDir\PesterOutputXml.xml'

Instead we’re going to include some other parameters, to include PassThru, Show, and OutVariable. PassThru will provide us an object that contains all of the Pester results, Show with the None value will hide the Pester test results, and OutVariable will get that object (that again, contains all of the Pester results) into the PesterResults variable.

PS > Invoke-Pester -Script @{Path = 'C:\WkDir\HostAcceptance.Tests.ps1'; Parameters = @{Project = 'trailking'; Environment = 'tst'}} -PassThru -Show None -OutVariable PesterResults

I was mainly okay with the above command; however, it was still writing my object to the host program, and I wanted nothing to be displayed at all. It's too bad that option isn't in there by default, but I'm okay with improvisation. Again, I know PowerShell better than Pester, so it's possible I just don't know enough at the moment to have done this a better way. Anyway, the below example removes all the output to the host, but still stuffs my results into the PesterResults variable.

PS > [System.Void](Invoke-Pester -Script @{Path = 'C:\WkDir\HostAcceptance.Tests.ps1'; Parameters = @{Project = 'trailking'; Environment = 'tst'}} -PassThru -Show None -OutVariable PesterResults)

Okay, now what? Everything I need and more is now in the $PesterResults variable. Next, we'll export it into an XML format I can deal with (as in Import-Clixml).

PS > $PesterResults | Export-Clixml -Path 'C:\WkDir\PesterExport.xml'

Now that it’s in a usable format, I’m going to read it back in. You can go ahead and pretend that I’ve moved my exported XML file from the computer on which it was created, and now I’m about to read it in on a different computer. This visualization is as though it was uploaded to AWS S3, and downloaded from S3 on a different device. We’ll say it’s on my Desktop.
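If you'd like that hop to be real, the AWS Tools for PowerShell can handle both directions. A hedged sketch, with the bucket and key names made up:

PS > Write-S3Object -BucketName 'tst-bucket' -Key 'pester/PesterExport.xml' -File 'C:\WkDir\PesterExport.xml' -Region 'us-gov-west-1'
PS > Read-S3Object -BucketName 'tst-bucket' -Key 'pester/PesterExport.xml' -File 'C:\Users\tommymaynard\Desktop\PesterStuff\PesterExport.xml' -Region 'us-gov-west-1'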

$PesterResults = Import-Clixml -Path 'C:\Users\tommymaynard\Desktop\PesterStuff\PesterExport.xml'
Foreach ($Result in $PesterResults.TestResult) {
    [System.Array]$Object += [PSCustomObject]@{
        Describe = $Result.Describe.TrimEnd(':')
        Context = $Result.Context.TrimEnd(':')
        It = $Result.Name.TrimEnd(':')
        Should = $Result.Result
        Time = $Result.Time
    }
} 

$Object
# $Object | Format-Table -Autosize
# $Object | Export-Csv -Path 'C:\Users\tommymaynard\Desktop\PesterStuff\PesterExport.csv' -NoTypeInformation

In the above example, we do a few things. We import the file we created out in AWS into a variable on my local computer, and then begin to iterate through its contents. For each entry, we add another object to my $Object variable. Each object will contain a Describe, Context, It, Should, and Time property, all obtained from the TestResult property of my $PesterResults variable.

The last three lines are a few ways to handle the output (2 and 3 are commented out): (1) display it in the host, (2) display it in the host in an easier-to-read format, and (3) write the objects to CSV. Neat, right? As I continue to better learn Pester… I might just be back here. I can't be 100% sure that this is the best way to save off and deal with the results of the Pester tests, but we'll see!

Parse Net.exe Accounts (for Pester)

In a recent Pester test, I needed to verify that three settings in net.exe accounts were properly set. These included Lockout threshold, Lockout duration (minutes), and Lockout observation window (minutes). Well, now that I have my answer, I thought I would document it here. Before I show you how I handled this task, based on varying versions of PowerShell, I’ll show you the default output that I needed to parse.

PS > net.exe accounts
Force user logoff how long after time expires?:       Never
Minimum password age (days):                          0
Maximum password age (days):                          42
Minimum password length:                              0
Length of password history maintained:                None
Lockout threshold:                                    3
Lockout duration (minutes):                           30
Lockout observation window (minutes):                 30
Computer role:                                        SERVER
The command completed successfully.

I needed the above output to be parsed, and when that was done, I only needed the values of the three previously mentioned Lockout settings to be displayed. The below code indicates that if PowerShell is a version greater than 4.0, the ConvertFrom-String cmdlet can be used. It's not necessary, but it was good to practice using a cmdlet I hardly ever use. If the PowerShell version isn't greater than 4.0, we'll use a temporary variable and do the parsing ourselves. In the end, and regardless of version, we'll get our results. I'm using [PSCustomObject], but I am confident this test will never run with a version of PowerShell less than 3.0. This is happening in AWS with a Server 2012 R2 AMI and, as we know, 2012 R2 includes PowerShell 4.0 by default.

If ($PSVersionTable.PSVersion.Major -gt 4) {
    $AcctSettings = net.exe accounts | ForEach-Object {
        ConvertFrom-String -InputObject $_ -Delimiter ': +' -PropertyNames Setting,Value
    }
} Else {
    $AcctSettings = net.exe accounts | ForEach-Object {
        $TempVar = $_ -split ': +'
        [PSCustomObject]@{Setting = $TempVar[0]; Value = $TempVar[1]}
    }
}
($AcctSettings | Where-Object {$_.Setting -eq 'Lockout threshold'}).Value
($AcctSettings | Where-Object {$_.Setting -eq 'Lockout duration (minutes)'}).Value
($AcctSettings | Where-Object {$_.Setting -eq 'Lockout observation window (minutes)'}).Value
3
30
30

This task was being done for Pester, so while we’re here, let me show it to you inside the Pester It Block.

# Account lockout policies.
It 'Checking the account lockout threshold, duration, observation window settings:' {
    If ($PSVersionTable.PSVersion.Major -gt 4) {
        $AcctSettings = net.exe accounts | ForEach-Object {
            ConvertFrom-String -InputObject $_ -Delimiter ': +' -PropertyNames Setting,Value
        }
    } Else {
        $AcctSettings = net.exe accounts | ForEach-Object {
            $TempVar = $_ -split ': +'
            [PSCustomObject]@{Setting = $TempVar[0]; Value = $TempVar[1]}
        }
    }
    ($AcctSettings | Where-Object {$_.Setting -eq 'Lockout threshold'}).Value | Should -Be 3
    ($AcctSettings | Where-Object {$_.Setting -eq 'Lockout duration (minutes)'}).Value | Should -Be 30
    ($AcctSettings | Where-Object {$_.Setting -eq 'Lockout observation window (minutes)'}).Value | Should -Be 30
} # End It.

That’s it! Now you can parse net.exe accounts, too!!

Parse Computer Name for Project Name

On some days… I just want a full-on project redo. It's amazing how many decisions you'd make differently in those initial project meetings once you've begun delivering results. Why, oh why, did we allow for hyphens in project names!?

Here’s my problem and how I fixed it. Let’s say we have four projects and their names are those listed below.

TRAILKING
ARF-SOIL
INTERALX
SECTRAIN

Now, let’s consider that we use these as our host names, or computer names; however, we append some information onto these four strings. For our terminal hosts, we’ll add -TH-01 and for our compute hosts, we add -CH-01. Therefore, we’d have two computer names for each project. For the TRAILKING and ARF-SOIL projects, we’d have the following four computers.

TRAILKING-TH-01, TRAILKING-CH-01, ARF-SOIL-TH-01, and ARF-SOIL-CH-01

Now, let's consider we need to parse these computer names later on to help determine the project name. Can you see the problem? Because I didn't initially. The little extra coding I had to do is why we're here today. You know, someone might need it one day, too.

If I split the full computer name at the first hyphen and assume index 0 is the project name, then the ARF-SOIL project is only going to be the ARF project. That's not going to work. Take a look at my one-off solution. I hate these, but sometimes it's just too late to fix a project problem. Hindsight, man.

$String1 = 'TRAILKING-TH-01'
$String2 = 'TRAILKING-CH-01'
$String3 = 'ARF-SOIL-TH-01'
$String4 = 'ARF-SOIL-CH-01'
$String5 = '-ARF-SOIL-CH-01'

$String = Get-Random -InputObject $String1,$String2,$String3,$String4,$String5
$TempArray = $String.Split('-')

If ($TempArray.Count -eq 3) {
    $Project = $TempArray[0]

} ElseIf ($TempArray.Count -eq 4) {
    $Project = "$($TempArray[0])-$($TempArray[1])"

} Else {
    Write-Warning -Message "Unable to properly parse computer name: $String."
    Write-Verbose -Message "$BlockLocation Unable to properly parse computer name: $String."
}

$Project

As we repeatedly run this code in the ISE, or Visual Studio Code, it'll properly parse our computer names. If the string is split at its hyphens and we're left with three parts (TRAILKING, CH, and 01), then we know the first part is the project name. If the string is split at its hyphens and we're left with four parts (ARF, SOIL, TH, and 01), then we know the first two parts, combined with a hyphen, are the project name.
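If I could guarantee the suffix format, a single regex replace would do it, too. This is a hedged alternative, not what I deployed; it assumes the suffix is always a hyphen, two letters, another hyphen, and digits:

PS > 'ARF-SOIL-CH-01' -replace '-[A-Z]{2}-\d+$'
ARF-SOIL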

That was it. Happy Thanksgiving!

Determine if AWS EC2 Instance is in Test or Prod Account

It recently became apparent that I need a way to determine if I'm on an AWS TST (test) EC2 instance or a PRD (production) EC2 instance. The reason this is necessary is so that I can include a function to upload a file, or a folder, to an S3 bucket and ensure the portion of the bucket name that includes the environment is included and is correct. Therefore, I needed a function to be able to determine where it was running: in TST or PRD. Had I known I would need this information earlier, I would've had the CloudFormation template write it to the Windows Registry or a flat file, so I could pick it up when needed. Had I considered an option such as this, I wouldn't be wracking my brain now to find a way to gather information I didn't leave for myself somewhere.

The computer names are the same in both environments (in both AWS accounts), so any comparison there doesn’t work. The TST servers and the PRD servers have the same name. After some thought, I came up with my three options:

1. Return the ARN from the metadata, and use that to determine whether I am on a TST or PRD EC2 instance, by extracting the AWS account number from the ARN and comparing it to two known values. This would allow me to determine which account I’m in—TST or PRD.

2. Look at the folders in C:\support\Logs. My functions, that are invoked inside the UserData section of a CloudFormation template, include logging that creates folders in this location named “TST” on a test EC2 instance and named “PRD” on a production EC2 instance.

3. Read in the UserScript.ps1 file (the UserData section in the CloudFormation template), and compare the number of the ‘tst’ and ‘prd’ strings in the file’s contents. Based on my UserData section, if there are more ‘tst’ strings, I’m on a TST EC2 instance, and if there are more ‘prd’ strings, I’m on a PRD EC2 instance.

Seriously Tommy, leave yourself some information on the system somewhere already. You never know when you might need it.

I decided I would use two of the three above options, providing myself a fallback if it were ever necessary. Who knows, I may not retire from my place of employment, and I’d like my code to continue to work as long as possible, even if I’m not around to fix it. I was worried about hard coding in those AWS account numbers, but less so, when I added a check against my UserScript.ps1 file (again, the UserData section of a CloudFormation document) for the count of ‘tst’ vs. ‘prd’ strings. Maybe it’ll never be used, but if needed, it’ll be there for me.

Here’s my first check in order to determine if my code is running on a TST or PRD EC2 instance. Feel free to look past my Write-Verbose statements. I used these for logging. So you have a touch of information, I’m creating the name of an S3 bucket, such as <projectname>-<environment>. Right now, we’re after the <environment> section, which I’m out to store in $Env. The <projectname> portion comes from parsing the EC2 instance’s assigned hostname.

#region Determine S3 Bucket using ARN and Account Number.
Write-Verbose -Message "$BlockLocation Determining the S3 Bucket Name using the ARN and account number."
$WebRequestResults = (Invoke-WebRequest -Uri 'http://169.254.169.254/latest/meta-data/iam/info' -Verbose:$false).Content
$InstanceProfileArn = (ConvertFrom-Json -InputObject $WebRequestResults).InstanceProfileArn
$AccountNumber = ($InstanceProfileArn.Split(':'))[4]
Write-Verbose -Message "$BlockLocation ARN: $InstanceProfileArn."
Write-Verbose -Message "$BlockLocation Account Number: $AccountNumber."

If ($AccountNumber -eq '615613892375') {
    $Env = 'tst'
} ElseIf ($AccountNumber -eq '368125857028') {
    $Env = 'prd'
} Else {
    $Env = 'unknown'
    Write-Verbose -Message "$BlockLocation Unable to determine S3 Bucket Name using the ARN and account number."
} # End If.
#endregion

The following region’s code is wrapped in an If statement that will only fire if the $Env variable is equal to the string “unknown”. Notice that in the above code, $Env will be set to this value if neither of the AWS account numbers matches my hard coded values. No, those aren’t real AWS account numbers—well, not mine at least, and yes, they could’ve come in via function parameters.

#region If necessary, determine using 'tst' vs. 'prd' in UserScript.ps1.
If ($Env -eq 'unknown') {
    Write-Verbose -Message "$BlockLocation Determining S3 Bucket Name by comparing specific strings in the UserScript.ps1 file."
    $FileContent = Get-Content -Path "$env:ProgramFiles\Amazon\EC2ConfigService\Scripts\UserScript.ps1"
    $MatchesTst = Select-String -InputObject $FileContent -Pattern 'tst' -AllMatches
    $MatchesPrd = Select-String -InputObject $FileContent -Pattern 'prd' -AllMatches
    Write-Verbose -Message "$BlockLocation Comparing counts of 'tst' and 'prd' strings."

    If ($MatchesTst.Matches.Count -gt $MatchesPrd.Matches.Count) {
        $Env = 'tst'
    } ElseIf ($MatchesTst.Matches.Count -lt $MatchesPrd.Matches.Count)  {
        $Env = 'prd'
    } Else {
        Write-Warning -Message "Unable to determine account number by comparing specific strings in the UserScript.ps1 file."
        Write-Verbose -Message "$BlockLocation Unable to determine account number by comparing specific strings in the UserScript.ps1 file."
    }
} # End If.
#endregion

If the above code fires, it will read in the contents of my UserScript.ps1 file. Again, this script file contains exactly what’s in the UserData section of my instance’s CloudFormation template. Once we have the file’s contents in a variable, we’ll scan it two times. On the first check, we’ll record the matches for the string ‘tst’. On the second check, we’ll record the matches for the string ‘prd’. The way I’ve written my UserData section, if there are more ‘tst’ strings, I’m on a TST EC2 instance, and if there are more ‘prd’ strings, I’m on a PRD EC2 instance. There’s a nested If statement that does this comparison and tells me which it is by assigning the proper value to that $Env variable.

So, after all that, I think I'll just try to predetermine what information might be needed in the future and drop that into a flat file or put it into the registry… just in case it's ever needed. This task was obnoxious but worthy of sharing.
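If I ever do leave that breadcrumb, it might look something like this; a minimal sketch, where the registry path and value name are made up:

# Stamp the environment on the system at build time (hypothetical path).
$RegPath = 'HKLM:\SOFTWARE\MyProject'
If (-Not (Test-Path -Path $RegPath)) {
    New-Item -Path $RegPath -Force | Out-Null
}
Set-ItemProperty -Path $RegPath -Name 'Environment' -Value 'tst'

# Later, read it back from anywhere.
(Get-ItemProperty -Path $RegPath -Name 'Environment').Environment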

Potentially Avoid Logging Plain Text Passwords

A few weeks ago I spoke at the Arizona PowerShell Saturday event where I introduced the newest version of my Advanced Function template — the 2.0 version. You can check it out here: http://tommymaynard.com/an-advanced-function-template-2-0-version-2017. It’s headed toward 200 downloads — not bad. Well, there’s going to need to be a newer version sooner rather than later, and here’s why.

The Advanced Function template writes out a few informational lines at the top of its log files. If you didn't know, a big part of my template is the logging it performs. These lines indicate what, when, who, and from where a function was invoked. In addition, it also logs every parameter name and every parameter value, or values, included. This was all very helpful until I pulled a password out of AWS' EC2 Parameter Store (it's secure) and fed it to the function with the logging enabled. It was a parameter value to a parameter named password, and it ended up in the clear, in a text file.

I know. I know. The function should only accept secure strings, and it will, but until then, I took a few minutes to write something I’ll be adding to the 2.1 version of my Advanced Function template. You see, I can’t always assume other people will use secure strings. My update will recognize if a parameter name has the word password, and if it does, it will replace its value in the logging with asterisks.

More or less, let’s start with the code that failed me. For each key-value pair in the $PSBoundParameters hash table, we write the key and its corresponding value to the screen.
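Here's a minimal stand-in for that concept; the function name and parameters are invented for illustration:

Function Test-LoggingConcept {
    [CmdletBinding()]
    Param (
        [string]$ComputerName,
        [string]$Password
    )
    # The flaw: every bound parameter name and value gets written out.
    Foreach ($Key in $PSBoundParameters.Keys) {
        "Including the ""$Key"" parameter with the ""$($PSBoundParameters[$Key])"" value."
    }
}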

Alright, now that we have our function in memory, let’s invoke the function and check out the results.
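PS > Test-LoggingConcept -ComputerName server01 -Password 'P@ssw0rd!'
Including the "ComputerName" parameter with the "server01" value.
Including the "Password" parameter with the "P@ssw0rd!" value.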

In the above results, my password is included on the screen. That means it could end up inside of an on-disk file. We can't have that.

Now, here’s the updated code concept I’ll likely add to my Advanced Function template. In this example, if the key includes the word password, then we’re going to replace its value with asterisks. The results of this function are further below.
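Again, as a stand-in sketch with the same invented function, the only change is a check on the key name; -match is case-insensitive by default, so it catches Password, MainPassword, and friends:

Function Test-LoggingConcept {
    [CmdletBinding()]
    Param (
        [string]$ComputerName,
        [string]$Password
    )
    Foreach ($Key in $PSBoundParameters.Keys) {
        If ($Key -match 'password') {
            # Mask any parameter whose name includes the word password.
            "Including the ""$Key"" parameter with the ""********"" value."
        } Else {
            "Including the ""$Key"" parameter with the ""$($PSBoundParameters[$Key])"" value."
        }
    }
}

PS > Test-LoggingConcept -ComputerName server01 -Password 'P@ssw0rd!'
Including the "ComputerName" parameter with the "server01" value.
Including the "Password" parameter with the "********" value.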

In these results, our password parameter value isn't written in plain text. With that, I guess I'll need to add this protection. By the way, if you didn't notice, Password gets underlined in green in my editor. This is neat; it's actually PSScriptAnalyzer recognizing that I should use a SecureString, based on the fact that the parameter is named Password. I can't predict what everyone will do, so what's a couple more lines to potentially protect someone from storing something secure in an insecure manner.