Category Archives: Quick Learn

Practical examples of PowerShell concepts gathered from specific projects, forum replies, or general PowerShell use.

Create Self-Signed Certificate and Export

Yesterday, I found myself walking through the usage of a couple of the cmdlets in the PKIClient (or PKI) module. Due to a project I’m working on, I may soon find myself needing to create a self-signed certificate. Because I’m not yet ready to implement this code, I thought it made sense to use my blog to store it until I am. While it’s for me, this could be helpful for you too.

$NewCertCreateParameters = @{
	Subject = 'powershell.functiontemplate.mydomain.com'
	CertStoreLocation = 'Cert:\CurrentUser\My'
	NotAfter = (Get-Date -Date 03/03/2021 -Hour 17).AddYears(10)
	KeyExportPolicy = 'Exportable'
	OutVariable = 'Certificate'
} # End.
New-SelfSignedCertificate @NewCertCreateParameters | Out-Null

In this first code section seen above, I used splatting. This allows me to create a hash table full of parameters and parameter values — key-value pairs*. Once created, it can be used — or splatted — as a part of a command’s invocation. In this instance, we’re splatting the hash table we created and stored in the $NewCertCreateParameters variable on the New-SelfSignedCertificate cmdlet. Notice that we’re piping our command to Out-Null to keep the default output of this command from displaying. There’s still a way to see it, however.

This bit of PowerShell creates a new self-signed certificate on the computer, which is associated with the current user. Since we included the OutVariable parameter in our hash table, we have the data returned by our cmdlet invocation stored in the $Certificate variable, even though we used Out-Null. While the below output only shows three properties by default, there’s plenty more that can be reviewed by piping $Certificate to Select-Object -Property *.

[PS7.1.0] C:\> $Certificate

   PSParentPath: Microsoft.PowerShell.Security\Certificate::CurrentUser\My

Thumbprint                                Subject              EnhancedKeyUsageList
----------                                -------              --------------------
56DA4F187DF396CCCB67B5C93F6F0CA7848C5E66  CN=powershell.funct… {Client Authentication, Server Authentication}

Using the code in the second section, we can view our self-signed certificate after it’s been created by the previous command.

Get-ChildItem -Path $NewCertCreateParameters.CertStoreLocation |
	Where-Object Thumbprint -eq $Certificate.Thumbprint |
	Select-Object -Property *

The final section has a few things happening. First, we create a secure string that holds a password. This isn’t a real password, so don’t bother. You get the idea, though, and really, for me, this is just about storing some code I may, or may not, use. The reason for the password is that we’re about to export the certificate to a Personal Information Exchange file — a PFX file — and we want it to be secure.

Second, we create a second parameter hash table and splat it onto the Export-PfxCertificate cmdlet. Just as we did before, we pipe this command to Out-Null to suppress its output. Lastly, I’ve included three commands. The first one shows the .pfx file in the file system, the second one removes the .pfx file from the file system, and the third one removes the self-signed certificate from the system completely.

$CertPassword = ConvertTo-SecureString -String 'canoN Beach 44$09' -Force -AsPlainText

$NewCertExportParameters = @{
	Cert = "Cert:\CurrentUser\My\$($Certificate.Thumbprint)"
	FilePath = "$env:USERPROFILE\Documents\PowerShellFunctionTemplate.pfx"
	Password = $CertPassword
} # End.
Export-PfxCertificate @NewCertExportParameters | Out-Null

Get-Item -Path $NewCertExportParameters.FilePath
Remove-Item -Path $NewCertExportParameters.FilePath
Remove-Item -Path $NewCertExportParameters.Cert

And finally, here’s all the code in a single code block.

$NewCertCreateParameters = @{
	Subject = 'powershell.functiontemplate.mydomain.com'
	CertStoreLocation = 'Cert:\CurrentUser\My'
	NotAfter = (Get-Date -Date 03/03/2021 -Hour 17).AddYears(10)
	KeyExportPolicy = 'Exportable'
	OutVariable = 'Certificate'
} # End.
New-SelfSignedCertificate @NewCertCreateParameters | Out-Null

Get-ChildItem -Path $NewCertCreateParameters.CertStoreLocation |
	Where-Object Thumbprint -eq $Certificate.Thumbprint |
	Select-Object -Property *

$CertPassword = ConvertTo-SecureString -String 'canoN Beach 44$09' -Force -AsPlainText

$NewCertExportParameters = @{
	Cert = "Cert:\CurrentUser\My\$($Certificate.Thumbprint)"
	FilePath = "$env:USERPROFILE\Documents\PowerShellFunctionTemplate.pfx"
	Password = $CertPassword
} # End.
Export-PfxCertificate @NewCertExportParameters | Out-Null

Get-Item -Path $NewCertExportParameters.FilePath
Remove-Item -Path $NewCertExportParameters.FilePath
Remove-Item -Path $NewCertExportParameters.Cert

* Before closing, unless you got down here sooner, I wanted to mention a couple of the parameters I opted to use in the first command invocation. The New-SelfSignedCertificate invocation included the NotAfter and the KeyExportPolicy parameters, as shown below. The NotAfter parameter allowed us to include the preferred expiration of the self-signed certificate. Mine used March 3, 2031, as I added 10 years to my date value. If this parameter isn’t included, it will default to a one-year expiration. The KeyExportPolicy parameter allowed us to make the private key exportable. This is not the default, so it must be included if you suspect the key will need to be exported, as ours did.

...
	NotAfter = (Get-Date -Date 03/03/2021 -Hour 17).AddYears(10)
	KeyExportPolicy = 'Exportable'
...

Simple Simple Microsoft Crescendo Example

Edit: There’s a Part II now!

There’s a newer module that’s been discussed a few times in written form, as well as on at least one podcast I listened to recently. Jason Helmick, an MVP turned Microsoft employee, has been notifying the PowerShell community about the Microsoft Crescendo PowerShell module. And he should be. It’s a unique idea for wrapping native system commands as PowerShell commands. I went looking for the easiest possible example and instead of finding that, I ended up here, writing about my first experience with the module.

Before I show you what I did, let me link a few posts about Crescendo. There was this one from Jason himself and a couple from Jim Truher: Part 1 and Part 2. Jim did the development of this module. At the time of this writing they aren’t taking PRs, but here’s the project on GitHub, too. And then somehow, I ended up watching this on YouTube with Jason and Jim.

The first, first thing I did was install the module from the PowerShell Gallery using the below command. I did that some time ago actually, but you know. I did, however, ensure that there wasn’t a newer version before beginning, using Find-Module.

[PS7.1.0] C:\> Install-Module -Name Microsoft.PowerShell.Crescendo

The second, first thing I did was go to “C:\Users\tommymaynard\Documents\PowerShell\Modules\Microsoft.PowerShell.Crescendo\0.4.1\Samples” and copy and paste one of the JSON file examples. I renamed it to mstsc.Crescendo.json. I don’t believe this is the traditional way this is done, but… it was me experimenting with the module. The mstsc.exe executable is used for RDC or Remote Desktop Connection. If you’re like me, you probably call it RDP. I replaced everything in the file with what’s included below. I don’t recall which of the examples I copied from, but I removed one of the parameters, as that one had two and I was only interested in including one for now. Based on the structure of the below JSON you can get an idea of what’s happening.

{
    "$schema" : "./Microsoft.PowerShell.Crescendo.Schema.json",
    "Verb": "Connect",
    "Noun": "RemoteComputer",
    "OriginalName":"/Windows/System32/mstsc.exe",
    "Parameters": [
        {
            "Name": "ComputerName",
            "OriginalName": "/v",
            "ParameterType": "string"
        }
    ]
}

The schema file is used to ensure what’s entered into this JSON file, my mstsc.Crescendo.json file, is correct. The verb is, well, the verb I wish to use. Make sure you use an approved verb. It checks against the Schema.json file for approved verb compliance. There’s a noun that’s needed, as well as the path to the native file we’re wrapping. After that is the single parameter I made available for use with this command. There are plenty of mstsc switches, but I only ever use /v. Perhaps it’s an odd choice for testing, I don’t know, but it was the first to come to me for something simple, simple to try.

In the above JSON, and still in regard to the single parameter I included, I’ve used ComputerName for the parameter name, which will stand in for /v. Additionally, I’ve indicated that the parameter type is a string. Therefore, any parameter value included with the parameter is expected to be a string value.

Once that portion was complete, I saved and closed my file and ran the Export-CrescendoModule command to create my module file — a .psm1 file. I didn’t see a way to avoid this, but this command will create the module file inside your current directory. Keep that in mind. I didn’t test with Out-File, but perhaps that would be an option for directing the output.

[PS7.1.0] C:\> Export-CrescendoModule -ConfigurationFile 'C:\Users\tommymaynard\Documents\PowerShell\Modules\Microsoft.PowerShell.Crescendo\0.4.1\Samples\mstsc.Crescendo.json' -ModuleName 'RemoteComputer.psm1'

Once the module file has been created, it’s time to import it and use it. Here’s my first go using my new module after my copy, paste, edit, and export method. Notice what happens when I don’t include the ComputerName parameter and value. It simply opens RDP with the last computer and user name. Helpful, but not exactly what I was after.

Here’s my second go at using the module’s Connect-RemoteComputer command. In this example, I included the ComputerName parameter and a value. As it’s not a computer that I’ve ever connected to, it’s prompting me to ensure I trust it. If you use this command with computers that you’ve already trusted, it’ll begin the RDP connection immediately. Perfect — just as I had expected.
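
In command form, the two runs amount to something like this; the computer name is hypothetical.

Import-Module -Name .\RemoteComputer.psm1

# Without -ComputerName: mstsc.exe opens with the last-used computer and user name.
Connect-RemoteComputer

# With -ComputerName: an RDP connection to the specified computer begins.
Connect-RemoteComputer -ComputerName 'server01.mydomain.com'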

A couple of things. This wasn’t a typical example. I think the idea behind the Crescendo module is to make command-line tools — like, strictly command-line tools — act more like PowerShell. I’ve been running mstsc from my command line for so long that it was one of the first commands that came to mind. Also, I think this is going to be a Part I of at least one more post. I’d like to try another command — look at this list! Additionally, based on the other command names in the Crescendo module, there appears to be a better way to start a new project that doesn’t include copying and pasting a sample file. I’m going to do a little more experimentation and get back to this if I can. Working with the other cmdlets in the module hasn’t been as straightforward as I had hoped, but I’ll know more as the weekend progresses.


PowerShell and the LAPS Windows PowerShell Module

There’s a difference between Windows PowerShell and just PowerShell. You know that, right? You know what the difference is, yeah? If not, then do know that Jeff Hicks wrote a great blog post about it recently titled “What the Shell is Happening?” I’m going to assume you’ve read this as I go along, so you might just do that quickly if you haven’t already.

Windows PowerShell and PowerShell really are two different things. We’re moving into a territory where you’ll be able to tell how much someone knows about Windows PowerShell/PowerShell based on how they discuss them. It’s going to be like PowerShell quoting rules: you can tell a good deal about someone’s Windows PowerShell/PowerShell knowledge based on when they do and don’t use single and double quotes.

I was recently looking at some documentation when I happened upon the LAPS Windows PowerShell module. If you’re looking for a strangely named module then look no further; it’s AdmPwd.PS. Seriously. How about just calling it LAPS!? Up until that point in the documentation review I was doing, everything I tried was working in PowerShell — commands from the Hyper-V module, commands from the ActiveDirectory module (read the Edit section at the bottom of this post when done reading this post), and all of the we-ship-these-with-the-product Microsoft modules and commands.

The LAPS module — we’ll just call it that — isn’t only a Windows PowerShell module; it’s also stored in the C:\Windows folder. That’s C:\Windows\System32\WindowsPowerShell\v1.0\Modules, to be exact. While tab-completion didn’t work on the module’s name, the Import-Module cmdlet did work, but not without a warning.
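
From memory, the command and its warning looked something like this; the exact wording can vary by PowerShell version.

[PS7.1.0] C:\> Import-Module -Name AdmPwd.PS
WARNING: Module AdmPwd.PS is loaded in Windows PowerShell using WinPSCompatSession remoting session; please note that all input and output of commands from this module will be deserialized objects. If you want to load this module into PowerShell please use 'Import-Module -SkipEditionCheck' syntax.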

The warning message above has to do with how PSRemoting sessions work. First off, a PSRemoting session is usually done from one computer to another. In this instance, however, it’s doing a PSRemoting session from and to the same computer: mine. In that remoting session, it’s loading up Windows PowerShell (powershell.exe [not pwsh.exe]). As can be seen below, running Get-PSSession will provide information about the Windows PowerShell (or WinPSCompatSession) PSRemoting session. Notice the -1 value for the IdleTimeout. Without any reading and research, I’m going to say that it means the session doesn’t time out. I left mine up for an hour or so, and sure enough, the LAPS commands continued to work without having to recreate a PSRemoting session.
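
Here’s an approximation of that output, trimmed to the relevant properties; aside from the IdleTimeout, your values may differ.

[PS7.1.0] C:\> Get-PSSession | Select-Object -Property Name,ComputerName,State,IdleTimeout

Name               ComputerName State  IdleTimeout
----               ------------ -----  -----------
WinPSCompatSession localhost    Opened          -1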

In any PSRemoting session, whether to the same computer or another, we get deserialized objects as the results. That means the final results that end up on the source computer from the destination computer do not consist of live PowerShell objects. I’m sure you can read more about it, but the result/output is serialized into XML, sent across the wire from one computer to the other, and then deserialized. This is how data is moved quickly from one machine to another. In our case, even though the session is to and from the same computer, it still makes use of the network card and serialization/deserialization.
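
You can see the effect with nearly any command run over a remoting session. This is a sketch, assuming PSRemoting is enabled on the local machine.

# A local object versus its deserialized counterpart from a remoting session.
(Get-Process -Id $PID).PSTypeNames[0]
# System.Diagnostics.Process

(Invoke-Command -ComputerName localhost -ScriptBlock {Get-Process -Id $PID}).PSTypeNames[0]
# Deserialized.System.Diagnostics.Process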

But there’s another way we can use this module. As mentioned in our warning message, Import-Module includes a SkipEditionCheck switch parameter. According to the documentation, this forces the command to skip evaluating CompatiblePSEditions in a module’s manifest file. This key indicates whether a module can be used with Windows PowerShell (the term “Desktop”), with PowerShell (the term “Core”), or with both, in which case it includes both terms. The LAPS module was written before CompatiblePSEditions was added to the module manifest files (.psd1 files).
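
For reference, a module that supports both editions carries something along these lines (a hypothetical .psd1 excerpt).

# Hypothetical module manifest (.psd1) excerpt.
CompatiblePSEditions = @('Desktop', 'Core')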

When this parameter is used, it doesn’t look for CompatiblePSEditions, as stated, and appears to load the LAPS module properly. Well, properly enough that I was able to test the Get-AdmPwdPassword command. Before we see some proof supporting that claim, let’s take a look at some very important information from the Import-Module documentation.

It turns out this module and its related commands might very well not have worked using the SkipEditionCheck parameter, but so far, they do. Oh, I didn’t mention this, but in the documentation for Import-Module and the SkipEditionCheck parameter, it does mention the path where the LAPS module is located: “Allows loading a module from the “$($env:windir)\System32\WindowsPowerShell\v1.0\Modules” module directory into PowerShell Core when that module does not specify Core in the CompatiblePSEditions manifest field.” So, a part of how this works is also due to the module’s location in the file system. Now, here’s our successful invocation of the Get-AdmPwdPassword command.
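
Here’s a sketch of that invocation; the computer name is made up, and the Get-Member output is trimmed to the TypeName.

[PS7.1.0] C:\> Import-Module -Name AdmPwd.PS -SkipEditionCheck
[PS7.1.0] C:\> Get-AdmPwdPassword -ComputerName 'TOMMYCOMPUTER' | Get-Member

   TypeName: AdmPwd.PSTypes.PasswordInfo
...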

By using this method — and it seems we’re lucky we can — we skip creating a PSRemoting session from/to the same computer. Therefore, we aren’t forced to work with deserialized objects. Typically, it’s fine if you do, but with the LAPS module and PowerShell, it’s not a requirement. Take a look at the TypeName value above. That’s a clear indication that we are working with live PowerShell objects. We didn’t see this earlier in our PSRemoting session, but if we had, it would’ve said, “Deserialized.AdmPwd.PSTypes.PasswordInfo.”

Edit: A few days have passed since I first published this post. I didn’t know it then, but I do now! The ActiveDirectory module in PowerShell also creates a PSRemoting session! It looks like it’s running in PowerShell, but it’s really running in Windows PowerShell “behind the scenes.” I’ve had no problems dealing with deserialized objects, by the way. This is because any piping I might do happens on the remote computer (my computer, but yeah), before the serialization/deserialization process.

Part V: Splunk, HEC, Indexer Acknowledgement, and PowerShell

In the last four parts of this series (Part I, Part II, Part III, Part IV), we discussed sending telemetry data to Splunk using HEC (HTTP Event Collector). This requires no software to be installed. We can send data for ingestion to Splunk using REST and a REST endpoint. In the previous four parts, we’ve included Indexer Acknowledgement. This setup uses a random GUID we create and send to Splunk in our initial connection using Invoke-RestMethod.

Then, in additional requests, we use the GUID to continually poll Splunk to determine not just that the data was received, but that it’s being processed. There was much too much delay for me to consider its use, and so I disabled Indexer Acknowledgement. In this post, we’ll take our final code from Part IV, below, and remove all the parts that were in place for Indexer Acknowledgement. This greatly reduces the amount of code and overall complexity seen in the previous parts of this series. Compare the two code examples below as we wrap up the series. Hopefully, if you were looking for ways to send data to Splunk using PowerShell, you found this series of articles in time for it to be helpful. If you’ve got Splunk available to you, then don’t forget that you have a place where data can be sent and stored after it’s been collected with PowerShell. And it’s not just about data storage. In fact, that’s such a small portion of the benefits of Splunk. If you’re collecting good data, then there’s nothing you can’t find by searching the data.

With Indexer Acknowledgement

#region: Read .env file and create environment variables.
$FilterScript = {$_ -ne '' -and $_ -notmatch '^#'}
$Content = Get-Content -Path 'C:\Users\tommymaynard\Documents\_Repos\code\functiontemplate\random\telemetry\.env' | Where-Object -FilterScript $FilterScript
If ($Content) {
    Foreach ($Line in $Content) {
        $KVP = $Line -split '=',2; $Key = $KVP[0].Trim(); $Value = $KVP[1].Trim()
        Write-Verbose -Message "Adding an environment variable: `$env`:$Key."
        [Environment]::SetEnvironmentVariable($Key,$Value,'Process')
    } # End Foreach.
} Else {
    '$Content variable was not set. Do things without the file/the environment variables.'
}   # End If.
#endregion.
 
#region: Read clixml .xml file and obtain hashtable.
$HashTablePath = 'C:\Users\tommymaynard\Documents\_Repos\code\functiontemplate\random\telemetry\eventhashtable.xml'
$EventHashTable = Import-Clixml -Path $HashTablePath
#endregion.
 
#region: Create Splunk / Invoke-RestMethod variables.
$EventUri = $env:SplunkUrl + '/services/collector/event'
$AckUri = $env:SplunkUrl + '/services/collector/ack'
$ChannelIdentifier = (New-Guid).Guid
$Headers = @{Authorization = "Splunk $env:SplunkHECToken"; 'X-Splunk-Request-Channel' = $ChannelIdentifier}
$Body = ConvertTo-Json -InputObject $EventHashTable
$HttpRequestEventParams = @{URI = $EventUri; Method = 'POST'; Headers = $Headers; Body = $Body}
#endregion.
 
#region: Make requests to Splunk REST web services.
$Ack = Invoke-RestMethod @HttpRequestEventParams -StatusCodeVariable StatusCode -Verbose
$StatusCode
$AckBody = "{`"acks`": [$($Ack.ackId)]}"
$HttpRequestAckParams = @{URI = $AckUri; Method = 'POST'; Headers = $Headers; Body = $AckBody}
 
Measure-Command -Expression {
    Do {
        $AckResponse = Invoke-RestMethod @HttpRequestAckParams -Verbose
        $AckResponse.acks.0
        Start-Sleep -Seconds 30
    } Until ($AckResponse.acks.0 -eq $true)
} # End Measure-Command
#endregion.

Without Indexer Acknowledgement

#region: Read .env file and create environment variables.
$FilterScript = {$_ -ne '' -and $_ -notmatch '^#'}
$Content = Get-Content -Path 'C:\Users\tommymaynard\Documents\_Repos\code\functiontemplate\random\telemetry\.env' | Where-Object -FilterScript $FilterScript
If ($Content) {
    Foreach ($Line in $Content) {
        $KVP = $Line -split '=',2; $Key = $KVP[0].Trim(); $Value = $KVP[1].Trim()
        Write-Verbose -Message "Adding an environment variable: `$env`:$Key."
        [Environment]::SetEnvironmentVariable($Key,$Value,'Process')
    } # End Foreach.
} Else {
    '$Content variable was not set. Do things without the file/the environment variables.'
}   # End If.
#endregion.
 
#region: Read clixml .xml file and obtain hashtable.
$HashTablePath = 'C:\Users\tommymaynard\Documents\_Repos\code\functiontemplate\random\telemetry\eventhashtable.xml'
$EventHashTable = Import-Clixml -Path $HashTablePath
#endregion.
 
#region: Create Splunk / Invoke-RestMethod variables.
$EventUri = $env:SplunkUrl + '/services/collector/event'
$Headers = @{Authorization = "Splunk $env:SplunkHECToken"}
$Body = ConvertTo-Json -InputObject $EventHashTable
$HttpRequestEventParams = @{URI = $EventUri; Method = 'POST'; Headers = $Headers; Body = $Body}
#endregion.
 
#region: Make requests to Splunk REST web services.
Invoke-RestMethod @HttpRequestEventParams -StatusCodeVariable StatusCode -Verbose
$StatusCode

Part IV: Splunk, HEC, Indexer Acknowledgement, and PowerShell

And now, Part IV! Like I’ve mentioned previously, please do yourself a favor and read the previous parts to this series of posts: Part I, Part II, and Part III. This won’t make sense without them. It may not make sense with them, and if that turns out to be true, then let me know by leaving a comment, or reaching me on Twitter. I know it’s possible that Splunk feels like a foreign language. What I know, I learned over the course of a few weeks, and it doesn’t feel like much. I probably shouldn’t be surprised that there’s so much to cover; so much was learned. Maybe you’ll have to do it too: I read the same articles over and over and over. I did have some help from a colleague, but if you don’t have that, then you can consider me that help, up to a point.

The next thing we’ll look at is our first POST request to Splunk. This is us, sending in our JSON payload. Notice the first Invoke-RestMethod command below. First, it includes our $HttpRequestEventParams parameter hash table. This includes the URI, Method, Headers, and Body parameters and their corresponding parameter values. Here’s a reminder of what’s in that parameter hash table.

$HttpRequestEventParams = @{URI = $EventUri; Method = 'POST'; Headers = $Headers; Body = $Body}

As a part of this Invoke-RestMethod command, we also included the StatusCodeVariable parameter. When our command completes, it will have created the $StatusCode variable, which should contain a 200 response if our message was received by Splunk. Additionally, we have this command writing any output of the command (from Splunk) into the $Ack variable.

#region: Make requests to Splunk REST web services.
$Ack = Invoke-RestMethod @HttpRequestEventParams -StatusCodeVariable StatusCode -Verbose 
$StatusCode
$AckBody = "{`"acks`": [$($Ack.ackId)]}"
$HttpRequestAckParams = @{URI = $AckUri; Method = 'POST'; Headers = $Headers; Body = $AckBody}

Measure-Command -Expression {
    Do {
        $AckResponse = Invoke-RestMethod @HttpRequestAckParams -Verbose
        $AckResponse.acks.0
        Start-Sleep -Seconds 30
    } Until ($AckResponse.acks.0 -eq $true)
} # End Measure-Command
#endregion.

Keep in mind that if we weren’t using indexer acknowledgment, we’d probably be done. But now, we’ve got to create a second POST request to Splunk, so we can determine when the data that we know was received (200 response) is headed into the Splunk pipeline for processing. It’s a guarantee of data ingestion (not indigestion). But, since we’re going along down the indexer acknowledgment path (for now and only now), let’s walk through what else is happening here.

First, we create $AckBody. It uses the PowerShell object in $Ack and turns it back into JSON. Invoke-RestMethod has this helpful feature of turning JSON into a PowerShell object, so we’ve got to reverse that so we can send it back to Splunk. Once $AckBody is done, we’ll use it as a parameter value in $HttpRequestAckParams. About $AckBody, be sure to read this page (it’s been linked before), under the “Query for indexing status” section. Splunk sends us an ack ID and then we need to send it back with the same headers as in the first Invoke-RestMethod request. Remember, this includes the HEC token (pulled out of our .env file forever ago), and the channel identifier we created as a random GUID.
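
Concretely, the two JSON shapes in play look something like this, assuming an ackId of 0.

# The body we send to the /services/collector/ack endpoint (our $AckBody):
{"acks": [0]}

# The response once Splunk has indexed the data; $AckResponse.acks.0 is then true:
{"acks": {"0": true}}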

Following? Again, reach out to me if you need this information and I’m failing at getting it across to you. I’m not too big to help/rewrite/clarify. I get it, this has the potential to make someone launch their laptop out of a second-story window. Luckily, I work on the first floor of my house. Also, print out the Splunk articles I’ve linked and make them a home on the drink table next to your couch (like I did).

Okay, we’re on the last step. Remember, we’re still working as though indexer acknowledgment is enabled. This whole second POST request is irrelevant if you’re not going to use it. As is the channel identifier. Again, I’ll post modified code just as soon as my indexer acknowledgment is disabled.

Measure-Command -Expression {
    Do {
        $AckResponse = Invoke-RestMethod @HttpRequestAckParams -Verbose
        $AckResponse.acks.0
        Start-Sleep -Seconds 30
    } Until ($AckResponse.acks.0 -eq $true)
} # End Measure-Command

I mentioned that I’m not going to be using indexer acknowledgment because of the time it takes; I simply don’t have that. I’m in the business of automating. I record the duration of every function I invoke. It isn’t going to work for me. Anyway, I have my second Invoke-RestMethod request inside a Do-Until loop. This loop continues to run the command every 30 seconds until I get a $true response from Splunk (that means that Splunk is for sure going to index my data). For fun, that’s inside of a Measure-Command Expression parameter. This is how I determined it took too much time for me. Below I’ll include the entire code as one block from all the posts so far. In the fifth post, yet to be written, I’ll include the entire code as one block, too. And remember, that post won’t have any requirements on an enabled indexer acknowledgment, the second POST request, the channel identifier, etc.

Thank you for hanging in for this one. Hopefully, it proves helpful for someone. Oh, that reminds me. There were two Splunk functions written to do all this when I started looking around. Maybe you found them too. They made little sense to me until I really took the time to learn what’s happening. Now that you’ve read this series, read over those functions; see what you understand about them now that you wouldn’t have been able to before. It might be much more than you thought it would be.

@torggler https://www.powershellgallery.com/packages/Send-SplunkEvent

@halr9000 https://gist.github.com/halr9000/d7bce26533db7bca1746

#region: Read .env file and create environment variables.
$FilterScript = {$_ -ne '' -and $_ -notmatch '^#'}
$Content = Get-Content -Path 'C:\Users\tommymaynard\Documents\_Repos\code\functiontemplate\random\telemetry\.env' | Where-Object -FilterScript $FilterScript
If ($Content) {
	Foreach ($Line in $Content) {
		$KVP = $Line -split '=',2; $Key = $KVP[0].Trim(); $Value = $KVP[1].Trim()
		Write-Verbose -Message "Adding an environment variable: `$env`:$Key."
		[Environment]::SetEnvironmentVariable($Key,$Value,'Process')
	} # End Foreach.
} Else {
	'$Content variable was not set. Do things without the file/the environment variables.'
}	# End If.
#endregion.

#region: Read clixml .xml file and obtain hashtable.
$HashTablePath = 'C:\Users\tommymaynard\Documents\_Repos\code\functiontemplate\random\telemetry\eventhashtable.xml'
$EventHashTable = Import-Clixml -Path $HashTablePath
#endregion.

#region: Create Splunk / Invoke-RestMethod variables.
$EventUri = $env:SplunkUrl + '/services/collector/event'
$AckUri = $env:SplunkUrl + '/services/collector/ack'
$ChannelIdentifier = (New-Guid).Guid
$Headers = @{Authorization = "Splunk $env:SplunkHECToken"; 'X-Splunk-Request-Channel' = $ChannelIdentifier}
$Body = ConvertTo-Json -InputObject $EventHashTable
$HttpRequestEventParams = @{URI = $EventUri; Method = 'POST'; Headers = $Headers; Body = $Body}
#endregion.

#region: Make requests to Splunk REST web services.
$Ack = Invoke-RestMethod @HttpRequestEventParams -StatusCodeVariable StatusCode -Verbose 
$StatusCode
$AckBody = "{`"acks`": [$($Ack.ackId)]}"
$HttpRequestAckParams = @{URI = $AckUri; Method = 'POST'; Headers = $Headers; Body = $AckBody}

Measure-Command -Expression {
	Do {
		$AckResponse = Invoke-RestMethod @HttpRequestAckParams -Verbose
		$AckResponse.acks.0
		Start-Sleep -Seconds 30
	} Until ($AckResponse.acks.0 -eq $true)
} # End Measure-Command
#endregion.

Hash Table in Hash Table to JSON

Edit: It turns out that I did in fact go over nesting a hash table inside a hash table in Part II of my Splunk series. There’s still some likable and solid content in this post though.

It’s how it works. A single topic, or idea, or even a real live project, can lead to additional writing and posting. As many might recognize, I use my blog for at least two things. One, it’s a place for you and others to come and potentially learn something new. Or maybe it’s just to reinforce a concept. I do my best to make things quick and clear. Two, it’s. for. me. Sometimes I share something simply because I need a place to store it for my own reference. Every post I’ve written — and I’m getting close to 350 of them — serves both purposes. This one certainly does, too.

If you’ve been paying attention, you know I’m currently working with my function template (which I write as FunctionTemplate at work), gathering telemetry data, and posting that to Splunk via a REST endpoint. It’s been fascinating. I had wanted an opportunity to work with Splunk and lucky for me a colleague mentioned it, even though I was preparing to work with AWS. I’m grateful they did!

A part of working with Splunk and REST and HEC requires that a payload be sent in as JSON. Luckily, PowerShell includes a command to convert a hash table to JSON. As a part of this project, I’ve converted numerous strings and even an array to JSON. Take a look.
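
Here’s a quick, contrived look at that: a couple of strings and an array on their way to JSON (the names are made up, and your key order may differ).

[PS7.1.0] C:\> ConvertTo-Json -InputObject @{Name = 'FunctionTemplate'; Versions = @('3.4.6','3.4.7')}
{
  "Versions": [
    "3.4.6",
    "3.4.7"
  ],
  "Name": "FunctionTemplate"
}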

I got to thinking that I want my telemetry code to acquire the city, state, and country via the public IP address and a geolocation API. Although it started this way, I decided I didn’t want single strings in my JSON for each of the properties.

Therefore, I needed to create a hash table of the data (within the parent hash table), and then convert that to JSON. Yes, you heard that correctly: we’re going to nest a hash table inside of another hash table and convert that to JSON. You may remember something about this in the Splunk series. Well, we cover it all again, and all because I want my Location data inside a hash table and not as three single strings. In the end, the below explanations — most of them, anyway — will get us to the JSON output you’ll see further down.

Let’s pretend my public IP address is 8.8.8.8. Much of the hey-sign-up text in the below response won’t be there if you’re using your own public IP address, as opposed to one of Google’s. I’d still recommend you take a look at the ipapi.co website and do your own due diligence regarding the API.
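
The request itself is a single Invoke-RestMethod call. Here’s a sketch of the response, trimmed to the properties we’ll care about (the API returns more than this).

[PS7.1.0] C:\> Invoke-RestMethod -Uri 'https://ipapi.co/8.8.8.8/json'

ip           : 8.8.8.8
city         : Mountain View
region       : California
country_name : United States
...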

Once I knew how to obtain my geolocation data via ipapi.co, I could use the below code. In the code I create three hash tables:

  • TelemetryHash
  • TelemetryHashStandard
  • TelemetryHashLocation

The TelemetryHashStandard hash table holds two key-value pairs: DateTime and Directory (as in our location within the operating system). These aren’t vital for more than the inclusion of a couple of standard entries in the parent hash table. The TelemetryHashLocation hash table holds three key-value pairs: City, Region, and Country.

Once the values are obtained and stored in one of the two hash tables, we store TelemetryHashStandard in TelemetryHash. Then we add our TelemetryHashLocation hash table as a single key-value pair to the TelemetryHash hash table. Now that you’ve gotten through that reading, be sure to review the below code.

Remove-Variable -Name TelemetryHash,TelemetryHashStandard,TelemetryHashLocation -ErrorAction SilentlyContinue
New-Variable -Name TelemetryHash -Value @{}
New-Variable -Name TelemetryHashStandard -Value @{}
New-Variable -Name TelemetryHashLocation -Value @{}

$TelemetryHashStandard.Add('DateTime',"$(Get-Date)")
$TelemetryHashStandard.Add('Directory',"$((Get-Location).Path)")

$Location = Invoke-RestMethod -Uri "https://ipapi.co/8.8.8.8/json"
$TelemetryHashLocation.Add('City',$Location.city)
$TelemetryHashLocation.Add('Region',$Location.region)
$TelemetryHashLocation.Add('Country',$Location.country_name)

$TelemetryHash = $TelemetryHashStandard
$TelemetryHash.Add('Location',$TelemetryHashLocation)
$TelemetryHash | ConvertTo-Json

{
  "Location": {
    "Region": "California",
    "Country": "United States",
    "City": "Mountain View"
  },
  "Directory": "C:\\",
  "DateTime": "01/13/2021 19:33:14"
}

See that; just above? There are the two single strings — DateTime and Directory — as well as the single Location hash table and its nested keys and values. And, just for fun, here’s another example. Here we create two hash tables: one for parents and one for children. Once they’re both populated, we add the children to the parents’ hash table. Like we did above, we could’ve gotten everything into a single hash table that was neither the parent’s nor the children’s hash table. All this to say, there’s no requirement for a third and final hash table prior to the JSON export.

[PS7.1.0] C:\> $Parent = @{}; $Child = @{}
[PS7.1.0] C:\> $Child.Add('Son','CM')
[PS7.1.0] C:\> $Child.Add('Daughter','AM')
[PS7.1.0] C:\> $Child

Name                           Value
----                           -----
Son                            CM
Daughter                       AM

[PS7.1.0] C:\> $Parent.Add('Father','Tommy')
[PS7.1.0] C:\> $Parent.Add('Mother','JM')
[PS7.1.0] C:\> $Parent

Name                           Value
----                           -----
Mother                         JM
Father                         Tommy

[PS7.1.0] C:\> $Parent.Add('Children',$Child)
[PS7.1.0] C:\> $Parent

Name                           Value
----                           -----
Children                       {Son, Daughter}
Father                         Tommy
Mother                         JM

[PS7.1.0] C:\> $Parent.Children

Name                           Value
----                           -----
Son                            CM
Daughter                       AM

[PS7.1.0] C:\> $Parent | ConvertTo-Json
{
  "Children": {
    "Son": "CM",
    "Daughter": "AM"
  },
  "Father": "Tommy",
  "Mother": "JM"
}

By nesting a hash table inside of another hash table, we can convert to JSON and maintain the data’s original structure. Add arrays and hash tables to the hash table you intend to convert/export to JSON, and they’ll be there just as you expect them to be.

Saving Time with Background Jobs

If you’re like me, there’s something you know a decent amount about regarding PowerShell, but you just don’t get to use it much. Today, it’s PowerShell background jobs. If you’ve been reading my blog lately, then you know I’m right in the middle of a series regarding Splunk. In the series, I’m sending telemetry data from my function template to Splunk. The problem, although slight, is that it’s increased the duration, or length of time, the function takes to complete. No surprise. It’s running several additional commands where it polls the user and system for information. It’s only adding maybe a second more to the duration of the function. Still, why not determine if it’s time that can be reclaimed. Enter background jobs.

If I can collect my telemetry data in the background, while the function is doing whatever it’s supposed to be doing, then I can potentially remove any additional time added to the invocation of the function due to collecting telemetry data. Let’s take a look at a couple of code examples to begin.

This first function is called Start-SleepWITHOUTBackgroundJob. Notice the “without” in the name. This function runs Start-Sleep twice: once for five seconds and then once for three seconds. Therefore, we’d expect the function to take around eight seconds to complete. The five-second section is standing in for where we’d run our standard function code. The three-second section is standing in for where we’d collect our telemetry data.

Function Start-SleepWITHOUTBackgroundJob {
    Start-Sleep -Seconds 5

    Start-Sleep -Seconds 3
} # End Function: Start-SleepWITHOUTBackgroundJob.

Measure-Command -Expression {
    Start-SleepWITHOUTBackgroundJob
}

Let’s run it a few times. As you’ll see, and just as we’d suspected, it comes in at right around the 8 second mark. If you’ve seen the output of Measure-Command then you can tell I’ve removed several of the time properties; they weren’t necessary.

Seconds           : 8
Milliseconds      : 16

Seconds           : 8
Milliseconds      : 26

Seconds           : 8
Milliseconds      : 22

The second function is called Start-SleepWITHBackgroundJob. We’ve swapped our Start-Sleep commands because we want what takes less time to happen first. It has to be what happens inside the background job. I suspect that gathering telemetry data is most always going to take less time than whatever else the function is doing. That may not always be the case, but it’s a safe choice.

Function Start-SleepWITHBackgroundJob {
    Start-Job -ScriptBlock {
        Start-Sleep -Seconds 3
    } | Out-Null

    Start-Sleep -Seconds 5
} # End Function: Start-SleepWITHBackgroundJob.

Get-Job | Remove-Job
Measure-Command -Expression {
    Start-SleepWITHBackgroundJob
}

And, look at that. We’ve shaved off three seconds from our function invocation by placing those three seconds inside of a background job. Our three seconds are running in a separate PowerShell process that executes at the same time the function sleeps for five seconds. This is going to work great for me.

Seconds           : 5
Milliseconds      : 596

Seconds           : 5
Milliseconds      : 795

Seconds           : 5  
Milliseconds      : 417

Now that we’ve proved we can use PowerShell background jobs to save time and avoid some unnecessary top-to-bottom/sequential programming, let’s do this while actually gathering some telemetry data. We’ll do two things at once and shave some time off the overall duration. The time difference may not be as dramatic as in the above examples, but I’ll take anything. In fact, consider this first.

When I invoke the real function, there’s a noticeable delay: a moment where my telemetry data is being gathered and sent to Splunk before the prompt reappears. The idea is to get those milliseconds back — they add up!

As you can see below, we have another code example. This will run without a background job. It’ll sleep for five seconds (as though it’s fulfilling its purpose), and then collect some telemetry data and display that on the screen. I’ll share the code in between each of the below regions at the end of this post in case someone finds themself interested.

Function Start-SleepWITHOUTBackgroundJob {
    Start-Sleep -Seconds 5

    #region: Obtain telemetry.
	New-Variable -Name FuncTmplHash -Value @{}
	New-Variable -Name TelemetryHashBonus -Value @{}
        #region: Determine PowerShell version.
        #endregion.
        #region: Check for other version: Windows PowerShell|PowerShell.
        #endregion.
        #region: Determine IP address(es).
        #endregion.
        #region: Determine Operating System.
        #endregion.
        #region: Determine computer tier.
        #endregion.
    $TelemetryHashBonus
    #endregion.
} # End Function: Start-SleepWITHOUTBackgroundJob.

Measure-Command -Expression {
    Start-SleepWITHOUTBackgroundJob | Out-Default
}

While the time difference isn’t too dramatic (roughly 750 milliseconds), it’s something. Something I want to partially reclaim. This is exactly the hesitation/pause I described above, before PowerShell rewrites a fresh prompt. Now, let’s get this corrected.

Function Start-SleepWITHBackgroundJob {
    Start-Job -ScriptBlock {
        #region: Obtain telemetry.
        New-Variable -Name FuncTmplHash -Value @{}
        New-Variable -Name TelemetryHashBonus -Value @{}
        #region: Determine PowerShell version.
        #endregion.
        #region: Check for other version: Windows PowerShell|PowerShell.
        #endregion.
        #region: Determine IP address(es).
        #endregion.
        #region: Determine Operating System.
        #endregion.
        #region: Determine computer tier.
        #endregion.
        $TelemetryHashBonus
        #endregion.
     } -OutVariable Job | Out-Null

    Start-Sleep -Seconds 5
    Receive-Job -Id $Job.Id
} # End Function: Start-SleepWITHBackgroundJob.

Measure-Command -Expression {
    Start-SleepWITHBackgroundJob | Out-Default
}

If we take a look at the below results versus the run without the background job, we can see that we’ve saved roughly 500 milliseconds, or half a second. That’s not much; I’d agree, even though it feels like an eternity when I’m waiting for my prompt to be rewritten. I guess I should consider that this isn’t the full telemetry gathering code I use. Still, for every two invocations, I save a single second. One hundred and twenty invocations saves me a minute. If my tools are far reaching, then there’s definitely time to be saved.

It does take time to create the job and receive its data once it’s complete, so perhaps that’s eating into my return on time, as well. That makes me think of one more thing worth sharing. If you find yourself interested in implementing something like this, then it’s probably wise to not assume the background job is complete, as I’ve done in these examples. Instead of running Receive-Job, first run Get-Job and ensure your job’s State property is “Completed,” and not still “Running.” It would probably be best to put this inside a Do-Until language construct, so it can loop until you can be certain the job is completed, before receiving its data.
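
Here’s a minimal sketch of that idea, assuming the $Job variable captured by -OutVariable in the earlier example.

# Poll the job's state before receiving its data.
Do {
    Start-Sleep -Milliseconds 250
} Until ((Get-Job -Id $Job.Id).State -eq 'Completed')
Receive-Job -Id $Job.Id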

I said I’d share the telemetry gathering code, so it’s been included below. I make no guarantees that it’ll work or make sense for you, but there it is.

#region: Obtain telemetry.
New-Variable -Name FuncTmplHash -Value @{}
New-Variable -Name TelemetryHashBonus -Value @{}

#region: Determine PowerShell version.
$FuncTmplHash.Add('PSVersion',"$(If ($PSVersionTable.PSVersion.Major -lt 6) {"Windows PowerShell $($PSVersionTable.PSVersion.ToString())"} Else {
	"PowerShell $($PSVersionTable.PSVersion.ToString())"})")
$TelemetryHashBonus.Add('PSVersion',$FuncTmplHash.PSVersion)
#endregion.

#region: Check for other version: Windows PowerShell|PowerShell.
If ($FuncTmplHash.PSVersion -like 'PowerShell*') {
	$TelemetryHashBonus.Add('PSVersionAdditional',
		"$(try {powershell.exe -NoLogo -NoProfile -Command {"Windows PowerShell $($PSVersionTable.PSVersion.ToString())"}} catch {})")
} ElseIf ($FuncTmplHash.PSVersion -like 'Windows PowerShell*') {
	$TelemetryHashBonus.Add('PSVersionAdditional',
		"$(try {pwsh.exe -NoLogo -NoProfile -Command {"PowerShell $($PSVersionTable.PSVersion.ToString())"}} catch {})")
} # End If-Else.
#endregion.

#region: Determine IP address(es).
$ProgressPreference = 'SilentlyContinue'
$TelemetryHashBonus.Add('IPAddress',(Invoke-WebRequest -Uri 'http://checkip.dyndns.com' -Verbose:$false).Content -replace "[^\d\.]")
$TelemetryHashBonus.Add('IPAddressAdditional',@(Get-NetIPAddress | Where-Object -Property AddressFamily -eq 'IPv4' |
	Where-Object -FilterScript {$_ -notlike '169.*' -and $_ -notlike '127.*'}).IPAddress)
$ProgressPreference = 'Continue'
#endregion.

#region: Determine Operating System.
If ($FuncTmplHash.PSVersion -like 'Windows PowerShell*' -and $FuncTmplHash.PSVersion.Split(' ')[-1] -lt 6) {
	$TelemetryHashBonus.Add('OperatingSystem',"Microsoft Windows $((Get-CimInstance -ClassName Win32_OperatingSystem -Verbose:$false).Version)")
	$TelemetryHashBonus.Add('OperatingSystemPlatform','Win32NT') 
} Else {$TelemetryHashBonus.Add('OperatingSystem',"$($PSVersionTable.OS)")
	$TelemetryHashBonus.Add('OperatingSystemPlatform',"$($PSVersionTable.Platform)")} # End If-Else.
#endregion.

#region: Determine computer tier.
Switch ($FuncTmplHash.'Domain\Computer') {{$_ -like '*-PT0-*'} {$TelemetryHashBonus.Add('ComputerTier','T0'); break} 
{$_ -like '*-PT1-*'} {$TelemetryHashBonus.Add('ComputerTier','T1'); break}
default {$TelemetryHashBonus.Add('ComputerTier','Unknown')}} # End Switch.
#endregion.
$TelemetryHashBonus
#endregion.

Part III: Splunk, HEC, Indexer Acknowledgement, and PowerShell

This is part three of a continuation of posts on Splunk, HEC, and sending data using PowerShell to Splunk for consumption. Make sure you’ve read the first two parts (Part I and Part II), as I’m going to assume you have!

Now that we have our data ready to be sent to Splunk in our $EventHashTable variable, we need to create some URIs using the environment variables we created in Part I. The first two lines create two URIs. The first one, $EventUri, combines the value stored in $env:SplunkUrl with the string ‘/services/collector/event’. The second one, $AckUri, combines the same $env:SplunkUrl with the string ‘/services/collector/ack’. When completed, these two URIs are two different Splunk REST endpoints.

#region: Create Splunk / Invoke-RestMethod variables.
$EventUri = $env:SplunkUrl + '/services/collector/event'
$AckUri = $env:SplunkUrl + '/services/collector/ack'
$ChannelIdentifier = (New-Guid).Guid
$Headers = @{Authorization = "Splunk $env:SplunkHECToken"; 'X-Splunk-Request-Channel' = $ChannelIdentifier}
$Body = ConvertTo-Json -InputObject $EventHashTable
$HttpRequestEventParams = @{URI = $EventUri; Method = 'POST'; Headers = $Headers; Body = $Body}
#endregion.

We’re only going to need the second URI — $AckUri (see the above code) — if our HEC token requires indexer acknowledgment. For now, we’ll assume that it does. Beneath the two lines that create and assign the $EventUri and $AckUri variables is the creation of the $ChannelIdentifier variable. As the HEC has indexer acknowledgment enabled (a checkmark in the GUI), you’re required to make more than one POST request to Splunk. You send your first one, which we’ll see momentarily, and then you send a second one, as well. The first one gets your data to Splunk. The second (or third, or fourth, or fifth, etc.) lets you know if your data is being processed by Splunk. It may take more than just a second POST request to get back an acknowledgment. It’s a nice feature — who wouldn’t want to know!? Not only do you get your 200 status response letting you know Splunk received your data in the first request, but you can also know when Splunk is actually doing something with it!

I loved the idea at first; however, I was quick to discover it was taking anywhere from two to four, and sometimes five, minutes to return “true.” Once it returned true, I could rest assured that the data was being indexed by Splunk. Nice feature, but for me, that’s too much time to spend waiting. If I get a 200 back from my first POST, then I’d rather just carry on without concerning myself with whether or not my data actually hit the Splunk processing pipeline. From Part I through here, we’re going to continue as though you want to use indexer acknowledgment. Toward the end, we’ll see the code changes I implement to remove it.

I should say this, however (it’s in the linked documentation): the idea behind indexer acknowledgment is not about security. It’s about, and I quote, “…to prevent a fast client from impeding the performance of a slow client.” You see, indexer acknowledgment slows things down. So much so, I couldn’t use it. Lovely idea, but not for my purposes. It also uses channels, as you might’ve gathered from our $ChannelIdentifier variable. To create a channel, you create a random GUID and send that in with your initial POST request. Then you use the same channel identifier to obtain the acknowledgment. As I write this and read this, I realize that you really ought to read through the document I linked above. Here it is again: https://docs.splunk.com/Documentation/Splunk/8.1.0/Data/AboutHECIDXAck. It’s going to do this process more justice than I have. That said, I feel confident that what I’ve had to say might actually prove helpful when combined with the Splunk documentation.

After our $ChannelIdentifier variable is created and assigned, we’re going to create a $Headers (hash table) variable. It will contain two headers: one for authorization that contains our HEC token and one for our channel identifier used for indexer acknowledgment. The below code has been copied from the above code example.

$Headers = @{
    Authorization = "Splunk $env:SplunkHECToken"
    'X-Splunk-Request-Channel' = $ChannelIdentifier
}

Following that variable creation, we’re going to create two more. One will be our $Body variable and the other a hash table of parameters and their values that we’ll end up including with Invoke-RestMethod in Part IV. As a part of creating the $Body variable, we’re going to take our hash table (with its nested hash table) and convert it all to JSON. This is what we want to send to Splunk. The below code has also been copied from the above code example.

$Body = ConvertTo-Json -InputObject $EventHashTable
$HttpRequestEventParams = @{
    URI = $EventUri
    Method = 'POST'
    Headers = $Headers
    Body = $Body
}

I think it’s important to see the JSON representation of our $EventHashTable variable because you’ll end up seeing this in the Splunk documentation. Notice the placement of the source, host, and time event metadata versus the (nested) event data.

$Body = ConvertTo-Json -InputObject $EventHashTable
$Body
{
  "source": "PowerShellFunctionTemplate",
  "host": "mainpc.tommymaynard.com\\TOMMYCOMPUTER",
  "event": {
    "OperatingSystemPlatform": "Win32NT",
    "PSHostProgram": "ConsoleHost",
    "PSVersionActive": "PowerShell 7.0.1"
  },
  "time": 1608148655,
  "sourcetype": "Telemetry"
}

The final line in our code example is the creation of a parameter hash table we’ll use — in a later part — to send our data to Splunk using Invoke-RestMethod.

$HttpRequestEventParams = @{
    URI = $EventUri
    Method = 'POST'
    Headers = $Headers
    Body = $Body
}

Until next time. … Part IV is now available.

Part II: Splunk, HEC, Indexer Acknowledgement, and PowerShell

In the first part of this series, we discussed using a .env file to obtain a Splunk HEC token and our Splunk URL. These two pieces of information are going to need to make their way into our POST request to Splunk. Be sure to read Part I if you haven’t already.

The next thing we need to do is pull in a hash table from the disk. The reason I’ve done it this way is to ensure I had a hash table to use without having to create it. It was a quicker way to work with my code, as I was still in the development and testing phase. In production, the hash table would be created not from the disk/stale data, but from real-time data collection.

#region: Read clixml .xml file and obtain hashtable.
$HashTablePath = 'C:\Users\tommymaynard\Documents\_Repos\code\functiontemplate\random\telemetry\eventhashtable.xml'
$EventHashTable = Import-Clixml -Path $HashTablePath
#endregion.

The above code does two things: One, it sets the $HashTablePath variable to an XML file. This file was created previously using Export-Clixml. The second thing it does is import the XML file using Import-Clixml. So, what’s in the file and what are we doing here? As we’ll see later, we’re going to use a hash table—an in-memory data structure to store key-value pairs—and get that converted (to something else) and sent to Splunk.
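
For reference, here’s roughly how a file like that gets created in the first place, assuming $EventHashTable already held the real-time data.

# Serialize the hash table to disk for later re-import with Import-Clixml.
$EventHashTable | Export-Clixml -Path $HashTablePath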

This isn’t just any hash table, so let’s take a look at what’s stored in the $EventHashTable variable once these two lines of code are complete.

$EventHashTable
Name                           Value
----                           -----
event                          {OperatingSystemPlatform, IPAddressPublic, Command:0, PSVersionOther…}
sourcetype                     Telemetry
host                           mainpc.tommymaynard.com\TOMMYCOMPUTER
time                           1608148655
source                         PowerShellFunctionTemplate

If you’ve worked with PowerShell hash tables before, then you’ll probably recognize that there’s a nested hash table inside this hash table. That, or at minimum, that one of the values doesn’t seem right. This nested hash table is stored as the value for the event key. The four other keys — host, time, source, and sourcetype — are standard key-value pairs that have values that are not nested hash tables. Let’s take a look at the key-value pairs inside the nested hash table. We can use dotted notation to view the values stored in each key.

$EventHashTable.host
mainpc.tommymaynard.com\TOMMYCOMPUTER
$EventHashTable.time; $EventHashTable.source
1608148655
PowerShellFunctionTemplate
$EventHashTable.sourcetype
Telemetry

Now, let’s view the key-value pairs stored inside the nested hash table. Then we’ll run through how we’re able to nest a hash table inside another hash table before it was saved to disk using Export-Clixml.

$EventHashTable.event
Name                           Value
----                           -----
OperatingSystemPlatform        Win32NT
IPAddress                      64.204.192.66
Command:0:CommandName          New-FunctionTemplate
PSVersionAdditional            Windows PowerShell 5.1.19041.610
DateTime                       Wed. 12/16/2020 12:57:35 PM     
Command:0:Parameter:1          PassThru:True
Template                       FunctionTemplate 3.4.7
Command:0:CommandType          Function
IPAddressAdditional            192.168.86.127
Command:0:Parameter:0          Log:ToScreen
Duration                       0:00:00:00.9080064
Domain\Username                mainpc.tommymaynard.com\tommymaynard
CommandName                    New-FunctionTemplate
ModuleName
Domain\Computer                mainpc.tommymaynard.com\TOMMYCOMPUTER
OperatingSystem                Microsoft Windows 10.0.19041
PSVersion                      PowerShell 7.1.0
PSHostProgram                  ConsoleHost

Look at all that data! It’s beautiful (although not completely accurate, so that it may be posted here). When it’s not pulled from the disk/stale data, such as in this example, it’s being gathered by a function during its invocation, and then it’s sent off to Splunk! Because Splunk is involved, it’s important to understand why we’ve set up the hash tables the way we have. The host, time, source, and sourcetype keys in the top-level hash table are what Splunk calls event metadata (scroll down to Event Metadata). It’s optional to send in this data, but for me, it made sense. I wanted to guarantee that I had control over these values versus what their default values might be. The nested hash table is the event data. It’s not data about data, as metadata is; it’s the data we collected and want Splunk to retain.

We’re going to wrap this part up, but I do want to show how I got a hash table to nest inside of another hash table. We won’t bother working with the complete information above, but I’ll explain what you need to know. Here’s our downsized event hash table.

$Event = @{
    OperatingSystemPlatform = 'Win32NT'
    PSVersion = 'PowerShell 7.0.1'
    PSHostProgram = 'ConsoleHost'
}

And here it is, stored in the $Event variable and being used as a value in the parent, or previously used term, top-level hash table.

$EventHashTable = @{
    event = $Event
    host = 'mainpc.tommymaynard.com\TOMMYCOMPUTER'
    time = [int](Get-Date -Date (Get-Date) -UFormat %s)
    source = 'PowerShellFunctionTemplate'
    sourcetype = 'Telemetry'
}
$EventHashTable
Name                           Value
----                           -----
source                         PowerShellFunctionTemplate
sourcetype                     Telemetry
host                           mainpc.tommymaynard.com\TOMMYCOMPUTER
event                          {OperatingSystemPlatform, PSHostProgram, PSVersion}
time                           1608148655

If you opt to include the time event metadata, do recognize that it uses, and expects, epoch time. This is the number of seconds after epoch, or January 1, 1970, 00:00:00 UTC. The -UFormat parameter with the %s parameter value will return this value. I believe this was added in Windows PowerShell 3.0, so if you have a client with an earlier version, first, I’m sorry, and second, there’s another way to obtain this value without the UFormat parameter: think datetime subtraction.
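
Here’s a minimal sketch of that subtraction approach; it assumes nothing beyond Get-Date and the .NET DateTime type.

# Seconds since the Unix epoch via datetime subtraction; no -UFormat required.
$Epoch = New-Object -TypeName DateTime -ArgumentList 1970,1,1,0,0,0,([DateTimeKind]::Utc)
[int]((Get-Date).ToUniversalTime() - $Epoch).TotalSeconds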

Okay, check back for more; there’s still plenty left to cover. Part III will be up soon!

Edit: Part III is now available.

Part I: Splunk, HEC, Indexer Acknowledgement, and PowerShell

A few weeks ago, I only knew one of the four words that make up this post’s title: PowerShell. I mean, I had heard of Splunk. But realistically, the other two words, which pertain to Splunk, I didn’t know. They weren’t a part of November’s vocabulary. Hopefully, you ended up here by doing some of the same searches I did. The same ones that didn’t really net me as much as I had hoped they would. I’m still no Splunk expert — I could see that taking some time and consistent interaction with the product — but I have sent data to Splunk using PowerShell via REST. Who would have thought!? Perhaps that’s what you need to do, too.

It was a few weeks ago when I was wrestling with how I should handle collecting telemetry data about users and their environment when they used a function written with my function template. If just maybe you wondered, my function template is derived using some of the same techniques I wrote about in The PowerShell Conference Book: Volume 3. This book can be obtained from Leanpub as an eBook. A print copy can be acquired from Amazon.

The initial idea was to use AWS API Gateway with DynamoDB. Lambda wouldn’t have been required, as those first two mentioned services integrate without it. I had a colleague mention Splunk though, as it’s available to me. With that, I quickly changed direction, and it’s worked out well. I didn’t get to check off those two AWS services from my I’ve-done-this-in-production list, but I’m nearing it for Splunk. I had wanted a reason to work with Splunk, and up until that moment, I didn’t have a project that offered that. At least I didn’t think I did.

There was at least one upfront requirement: I couldn’t install software. My PowerShell functions could be running anywhere, and so a REST endpoint was a requirement. Lucky for me, I had an option (that I was going to need to learn a bit about). I needed a term with which I could begin my searching, and reading, and research, and eventually my coding and integration. It was HEC. That’s pronounced “heck.” I know because I watched at least one Splunk education video. Sadly, it was later in the overall development process.

HEC, or HTTP Event Collector, is a means by which data can be securely sent to Splunk for ingestion. Read over the included link for a 20 second lesson. Seriously, it’s short and worth it! Using HEC allowed me to use a token as authentication and use REST to get my PowerShell function acquired telemetry data into Splunk. Let’s get started with some PowerShell already.

First things first, we need to protect our HEC token. You’ll need to get it from your Splunk administrator. There are a few things out of my hands here, and that is one of them. I don’t have administrative permissions to Splunk, and that’s been just fine for me. The HEC token will allow our PowerShell to authenticate to Splunk without the need for any other type of credentials. The problem was that under no circumstances did I want to embed my HEC token in my function. Therefore, I opted to use a .env file (for now?). We’ll start with that concept. This isn’t 100% foolproof, but again, it’s much, much better than dropping my HEC token into my code. Here’s the first section of the code I’m going to share.

#region: Read .env file and create environment variables.
$FilterScript = {$_ -ne '' -and $_ -notmatch '^#'}
$Path = 'C:\Users\tommymaynard\Documents\_Repos\code\functiontemplate\random\telemetry\.env'
$Content = Get-Content -Path $Path | Where-Object -FilterScript $FilterScript
If ($Content) {
	Foreach ($Line in $Content) {
		$KVP = $Line -split '=',2; $Key = $KVP[0].Trim(); $Value = $KVP[1].Trim()
		Write-Verbose -Message "Adding an environment variable: `$env`:$Key."
		[Environment]::SetEnvironmentVariable($Key,$Value,'Process')
	} # End Foreach.
} Else {
	'$Content variable was not set. Do things without the file/the environment variables.'
}	# End If.
#endregion.

This has nothing to really do with Splunk so far, but it may help you protect your HEC token. I’d like to get the values in this file stored in Azure Key Vault someday. This is just for now… to see that everything else is working. Let me give you an example of what the .env file looks like, and then we’ll break down what the previously included code is doing.

# FunctionTemplate Environment Variables.

## Splunk Environment Variables.
SplunkUrl=https://splunkhectest.prod.splunk.tommymaynard.com:8088
SplunkHECToken=88a11c44-3a1c-426c-bb04-390be8b32287

The code example farther above creates and stores a script block inside the $FilterScript variable. This will exclude lines in our .env file that are empty or begin with a hash (#) character. It’s used as a part of the Get-Content command, which reads in the file’s lines that we actually want and assigns them to the $Content variable. As you can see, each line we want includes a string on the left, an equals sign (=) in the middle, and a string on the right. The remainder of the code — the Foreach loop — creates a process environment variable for each usable line during each iteration. When it’s completed, we’ll have $env:SplunkUrl and $env:SplunkHECToken, and both will be populated with their respective values from the right side of the equals sign. It’s not perfect, but it’s 10x better than storing these values in the actual function template.

I’m going to stop here for now, but we’ll continue very soon with Part II. We haven’t gotten into Splunk yet, but we’re now set to do just that. With these two values (in variables), we can start working toward making our POST request to Splunk and passing along the data we want indexed.

Part II is now available.