Automagically Secure PowerShell Remote Sessions

In my previous post, PowerShell for Good and Sometimes for Evil, I detailed a few steps that you can take to help secure your systems from malicious use of remote PowerShell sessions. I very quickly realized, as I'm sure many will, that the manual steps are great for understanding the concept of what's being done, but they don't really help me in a real-life administration setting. I would still need to manually configure a bunch of different items on potentially dozens of servers.

That’s no good. If only there was some sort of technology out there to help me script these kinds of tedious administrative tasks to make life easier. Oh wait, isn’t that what this blog is all about?

Right! So to make this configuration a little more effective for real work, let's put together a script to do it for me within a deployment. Before I really dive into my first PowerShell script on this blog, just a couple of points about how I view PowerShell and writing scripts. Of course it's an art form and everyone does it differently, so hopefully this gives you an understanding of my scripting style.

  • When I initially write a script I try to make it as simple as possible to make sure it does what I want it to do, then layer on usability and extensibility later. It's a lot easier to start with a simple script and get it working before adding a bunch of error handling and script parameters. A good chunk of my scripts starting out here will be fairly basic in this sense; I'd prefer to give you something I know will work that gives you an understanding of what the script does, and you can add whatever logging or environment logic fits your needs.
  • I try to avoid a lot of the shorthand and aliasing that exists in PowerShell. Those two things are extremely helpful when you're a PowerShell guru firing out massive scripts that would otherwise take a lot of time to write, but I find a lot of my scripts end up being consumed by people who have very basic PowerShell knowledge, and shorthand and aliases just make a script difficult to follow.
  • I try to always work off the latest and greatest version of PowerShell. This is really a no-brainer; every single release it gets better and easier to use. So if you have bits of script that don't apply because you're using PSv2, please upgrade – you won't regret it, I promise.

Alright, with all that out of the way, let's dive into the script.

With this script I'm assuming that all your servers are part of your domain, and each has a Computer certificate available. In a later iteration of this script we could integrate a certificate request from the Enterprise CA, but for simplicity let's assume you've already got one. I'm also assuming you have the Remote Server Administration Tools (RSAT) installed, as well as the Group Policy Management feature enabled if you wish to do the bits with the GPO.

First thing we’ll want to do is get some administrator credentials to do all the operations we want. It’s bad practice to hard code usernames and passwords in your insecure scripts, so let’s use Get-Credential to securely collect that

First thing we’ll do is retrieve a list of machines we want to apply this to. We can do this using the Get-ADComputer cmdlet (don’t forget to have the RSAT feature enabled on Windows or you won’t have the cmdlets).

I keep my servers in a root OU named Servers, so my search base comes out looking like this (of course, replace local613 and com with your own domain name and suffix).
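Putting it together, something like this does the trick:

$serverList = Get-ADComputer -Filter * -SearchBase "OU=Servers,DC=local613,DC=com"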

Great, now I’ve got a list of machines in my $serverList variable, lets get to work by setting up WinRM to accept HTTPS connections. I’ll do this remotely by using the Invoke-Command cmdlet and using the administrator credentials I collected earlier

I’m using the -force switch on the WinRM command, since there should already be an HTTP listener running on the server, and WinRM will complain there’s an existing configuration and the WinRM service is already running.

Off-Topic – Zimbra Installation on CentOS Host File Error

Okay, so this is a little off topic from PowerShell, but I had to post this since I spent a couple of days struggling with it and a lot of time on Google without finding an answer that worked for me. Now that I've figured out my problem, I have to blog about it so that hopefully someone, somewhere will be saved by this if it ever pops up in Google.

So here’s the error I was receiving:

ERROR: Installation can not proceeed. Please fix your /etc/hosts file
to contain:

<ip> <FQHN> <HN>

Where <IP> is the ip address of the host,
<FQHN> is the FULLY QUALIFIED host name, and
<HN> is the (optional) hostname-only portion

Okay, that sounds simple enough, doesn't it? Just go pop open my /etc/hosts file and make sure I have the usual bits in there. Did that. Same error, no matter how I formatted my hosts file.

For the record, here's what was in my hosts file:

192.168.56.68 mail.mydomain.com mail
127.0.0.1 localhost.localdomain localhost

Looks golden to me. Even flipping the FQDN and the hostname didn’t fix the problem.

Finally, I found it.

My hostname in my configuration was set to the FQDN, mail.mydomain.com, when in reality it should have just been mail, without the domain. Configured that properly, and voila, Zimbra installed no problem.
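On a systemd-based release (CentOS 7 and up) that's a one-liner; on older versions you'd set HOSTNAME in /etc/sysconfig/network instead:

hostnamectl set-hostname mail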

*Facepalm*

I really do hope this helps someone, somewhere, sometime.

#I’mGoingBackToWindows

PowerShell for Good and Sometimes for Evil

I read an interesting article on a flight back home yesterday detailing a brazen bank heist in Russia, in which the robbers used remote PowerShell commands to instruct an ATM to spit out hundreds of thousands of dollars in cash. Interestingly, the ATM itself wasn't hacked in the way I expected, where someone gains access to an input panel and loads up some malware; instead the hackers had managed to get into the broader bank network and create a series of tunnels to eventually reach the ATM network and issue the commands. This specific attack used malware that deleted itself but, either by mistake or code error, left some logs behind, which allowed security researchers to backtrack and figure out what had happened. You can read the full article here.

This got me wondering: in a world of secure network deployments, where there may be some tunneling from other networks, how can I protect systems from executing malicious code from a remote source? The most obvious answer is to block the PowerShell Remoting and WinRM ports on the firewall from the broader network (ports 5985 and 5986 for HTTP and HTTP/s respectively). That should generally protect the systems unless the firewall is compromised, or physical access to the servers is obtained. This is a nice solution since it doesn't restrict me from being able to use remote PowerShell sessions when an authorized party has access to the system.

I can also change these ports from the defaults to obfuscate them somewhat from the outside. This is a good security practice, since using default ports just makes things easy for an attacker – and the more roadblocks we put up, the less likely they are to be successful. We can change the port for WinRM right from PowerShell.
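Via the WSMan drive, with 8888 standing in as an example port:

Set-Item WSMan:\localhost\Listener\*\Port -Value 8888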

You’ll want to replace * with the auto-completed Listener name (if you have an HTTP and an HTTP/s listener), and you’ll have to run it once for the HTTP listener, then run it again with a different port for the HTTP/s listener if one exists. In my case I don’t yet have an HTTP/s WinRM listener configured so I can get away with using the wildcard.

Restarting the WinRM service is required for the change to take effect.
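No surprises here:

Restart-Service -Name WinRM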

Now when I’m initiating a Remote PowerShell session, I need to ensure I’m specifying the now non-standard port by using the -Port switch in Enter-PSSession

This puts me in a fairly comfortable position now in the event that someone does gain access to my closed network and attempts to do some PowerShell remoting. My final step to really help me sleep at night is to make sure that when I am using PowerShell remote sessions I'm doing so securely, lest a pesky network sniffer gain valuable intel on the system based on my administrative work. To do this, I'm going to use a couple of Group Policy settings to ensure the WinRM service isn't accepting insecure connections, in case I forget to connect securely during a 3 AM emergency service call.

Within my default domain policy, I'll configure a couple of settings:

Computer Configuration\Policies\Administrative Templates\Windows Components\Windows Remote Management (WinRM)\WinRM Service\Allow Basic Authentication -> Disabled

Computer Configuration\Policies\Administrative Templates\Windows Components\Windows Remote Management (WinRM)\WinRM Service\Allow CredSSP Authentication -> Disabled

Computer Configuration\Policies\Administrative Templates\Windows Components\Windows Remote Management (WinRM)\WinRM Service\Allow Unencrypted Traffic -> Disabled

Now that we’ve made these changes in the GPO, I’ll have to go configure WinRM for HTTP/s on my original server. There’s a great Microsoft Support article on the subject here but for brevity, here’s the steps

  • Launch the Certificates snap-in for the Local Computer
  • Ensure a Server Authentication certificate exists within Personal\Certificates; if not, request one through your domain CA
  • Use this command to quick configure WinRM
    winrm quickconfig -transport:https

Now when I connect, I'll need to remember to ask for an SSL connection.
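The key addition is the -UseSSL switch (the server name here is a placeholder; if you changed the default port earlier, add -Port as well):

Enter-PSSession -ComputerName SERVER01 -UseSSL -Credential (Get-Credential)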

It’s not perfect, and it wont stop everyone, but as my father used to say it’s just enough to keep the honest people honest. For those going the less than honest route, I also found this white paper from the Black Hat USA 2014 conference on Investigating PowerShell Attacks to be extremely interesting.

There’s a few good resources out there on the security considerations of remote PowerShell sessions, specifically this one from JuanPablo Jofre on MSDN, and this one from Ed Wilson, of Scripting Guys.


My Dream Project Is Finally Happening!

So here it is: I'm finally going to get to kick off one of my longer-term dream projects. I'm going to be writing a PowerShell library to replace one of our internal tools based on the Microsoft Management Console. I've never written a PowerShell snap-in before, so I figure it's a great learning opportunity to discuss my trials, tribulations, and ultimately my wins as I go along.

A large portion of my blog moving forward will cover the work I'm putting into the C# code behind the PowerShell snap-in, and the scripts I'll use to test the various aspects.

Given the nature of the work that I do, I will likely be generalizing a lot of code and using placeholder variable names, or posting only small code snippets, as opposed to what I'd prefer to do – which is push it all to Git so you could use it as you see fit.

Stay tuned, it’s going to be an exciting few weeks!

When Environment Variables Go Wrong

One of the things I always try to impart on people learning PowerShell is to make your scripts portable. While we're usually writing scripts to solve a specific problem for ourselves in a specific environment, it really sucks when your friend at Acme IT Services is having an identical problem and you say, "Oh, I've got a great script to do that!", but every path and location is hard-coded, it doesn't work when they run it, and you both spend a couple of hours trying to adapt it to their situation. If that isn't reason enough, even taking your own scripts from environment to environment, or job to job, can be a total nightmare if you haven't designed them with portability in mind.

This is where environment variables come in really handy, especially in larger, more secure, or higher-performance server deployments that make use of system partitions and non-standard locations for various folders, or when you're running against various Windows kernel versions (XP vs. Windows 7, for example).

Environment variables are a great tool, and there's a lot of information you can get from them (see a little list here: https://ss64.com/nt/syntax-variables.html). However... I just ran into a problem with environment variables that I obviously didn't foresee. When a script is being run by a user, you have all kinds of environment variables available to you, but when you're automating a script run and running it as NETWORK SERVICE, for example... you have a much smaller subset of environment variables (if any at all).

In my specific case, I have a script that needs to copy a file to a share in DFS, and I want the domain to be automatically populated in the path based on where the script is being run.

So here’s my code snip

Looks good to me, when I run it my file goes where I want it to go, everything is fantastic.

Now I go to automate the script using Windows Task Scheduler. In this environment the security requirements don't allow me to store a password for a service account when I create the scheduled task. That's probably a good thing, since if the service account password changed, the task would fail and likely nobody would notice for a good while. Since I'm accessing network resources, I'm electing to run the task as NETWORK SERVICE. However, when I run the script this way, my file doesn't end up in the target location. My logging of course shows we've had an issue saving the file, but I don't get the same error when I run the script in PowerShell ISE. That's suspicious.

It turns out NETWORK SERVICE doesn't have a USERDNSDOMAIN variable, since it's not really a user. So herein lies my problem: I've accounted for the portability of a script from one system to another, but I haven't accounted for the specific account that's running the script.

Well, now how do we solve this problem? I still don't want to hard-code values or build in a global settings section, since a cardinal rule of mine is that people shouldn't have to edit my scripts after I give them out, in case they don't know PowerShell. This is where WMI becomes a handy alternative to environment variables, without meaningfully impacting script performance – and arguably giving me a much wider breadth of information. A very quick modification to my script has me add a domain variable and modify my Out-File statement slightly.
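With a stand-in path again, the modification looks something like this:

# WMI works regardless of which account runs the script
$domain = (Get-WmiObject -Class Win32_ComputerSystem).Domain
$Output | Out-File "\\$domain\Share\Scripts\output.txt"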

Problem solved!

Now if I wanted to change my script's behavior depending on whether the variables are available or not, I could do something like this.
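A rough sketch of that check:

if ($env:USERDNSDOMAIN) {
    # Running as a regular user - the environment variable is available
    $domain = $env:USERDNSDOMAIN
}
else {
    # Running as NETWORK SERVICE or similar - fall back to WMI
    $domain = (Get-WmiObject -Class Win32_ComputerSystem).Domain
}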

Granted, the performance difference between the two is negligible, but when I use the Measure-Command cmdlet to determine execution time, environment variables are clearly the faster option.
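If you want to try the comparison yourself:

Measure-Command { $domain = $env:USERDNSDOMAIN }
Measure-Command { $domain = (Get-WmiObject -Class Win32_ComputerSystem).Domain }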

As a matter of fact, environment variables are so fast they don't even register in full milliseconds. If this were a very large script, doing WMI queries could significantly impact our script performance.

That gives me an idea about doing a post on script performance…

I hope you’ve learned something from my little investigation here, it’s certainly given me one more thing to think about when designing my scripts.

Happy PowerShelling!

The Trouble With File Encoding

One of the most frustrating things I deal with on a regular basis when it comes to exporting data to a file for ingestion into another application is file encoding.

I understand the need for different character encodings, given the vast differences in languages and the technical hurdle of keeping a code page from becoming massive and slowing everything down trying to decode various characters.

My understanding however does nothing to curb my frustrations.

I’m currently working on a project where I’m using PowerShell to pull some data from SQL and create a file that’s consumed by a third party application. I found that simply using the default PowerShell file encoding on an Out-File operation like this

$Output | Out-File "\\share\directory\filename"

What I ran into was that when my third-party application attempted to ingest the file, it completely ignored it and used its defaults instead. As it turns out, PowerShell defaults to Unicode (UTF-16) with Out-File, and my application was expecting a UTF-8 encoded file.

Now of course in my mind the perfect way to resolve the problem is that the application should handle whatever file encoding it gets and happily load it. Since we don’t live in a perfect world, and since I don’t have control over this third party application, I have to find another way.

Of course there’s a few options out there to determine the file encoding (a quick Google Search brought me to http://poshcode.org/2153) and then modifying my Out-File command as such

$Output | Out-File "\\share\directory\filename" -Encoding UTF8

There really must be a better way, where my export could be smarter based on the target file's encoding if it already exists, or something to that effect.

I’m going to leave this post here as a TODO:Fix this problem with PowerShell.

Navigating Connection Strings

I recently had a project in which I was required to connect to a SQL Server, and I always have to bring up my trusty connection string reference – which is usually the Microsoft TechNet article that outlines all the different connection string parameters.

This had me wondering whether there really was a better reference, something a little easier to digest and use.

Hidden away inside another TechNet article I was reading was this little gem:

https://www.connectionstrings.com

Needless to say, I have a brand new reference bookmark for SQL Connection strings.
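For reference, here's roughly what one ends up looking like from PowerShell (the server and database names are made up):

# Integrated Security=True uses the current Windows credentials
$connectionString = "Server=SQL01;Database=Inventory;Integrated Security=True;"
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)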