Homestead Network Upgrades

October 22, 2017 at 1:05 pm

Despite coming from the networking side of IT, I tend to use regular consumer-grade equipment at home. It typically just works, and I’m not looking for extreme reliability or features. I’ve been using hardware from Linksys, Netgear, and the other consumer network vendors for at least the last 10 years. Sometimes, though, things happen that make you reevaluate your previous life choices…

For me, that thing was an email that I received from Verizon saying my router was infected with malware. Since I always take basic precautions like changing the default password and locking down external ports, I was a bit surprised. Turns out, there was a vulnerability in the firmware that had gone unpatched for months… In hindsight, I should not have been that surprised. At all. I thought I had purchased a flagship router that would be supported for at least a few years, but it didn’t look like any more patches were coming. Ever. I looked into trusty old DD-WRT figuring that I could flash the router and at least get another year out of it, but apparently the R7000 has some performance issues with DD-WRT.

After having issues like this a few times with generic consumer-grade stuff over the years, no matter the vendor, I decided enough was enough. I researched available options in the enterprise hardware space (way too expensive and time consuming to set up), looked at open source alternatives (cheap, but time consuming, and not well integrated), and even looked at the more pro-level offerings from consumer manufacturers (underwhelming). After a few days, I decided on and purchased some Ubiquiti hardware based on the many good reviews and a few personal recommendations from networking folks I respect.

Ubiquiti’s hardware is solid stuff, performance-wise, and they have a very good reputation. The hardware is what I would call “Enterprise Lite”, meaning it’s not Cisco, but it’s perfect for small to medium businesses that just want things to work. Additionally, the Unifi configuration system and dashboard is excellent, taking a significant configuration and support burden off of me.

The initial hardware purchase was:

  • Unifi Security Gateway Pro (Amazon) - I definitely went overkill here. The entry-model USG is capable of routing gigabit at near wire speed. However, I decided that I liked the extra ports for a few future projects, like the barn office.
  • Unifi Switch 8, 60 Watt (Amazon) - Since the new network was not an all-in-one setup, I needed something to power the other devices around the house. This managed switch provided a lot more than just that, though. The VLANs will come in handy when we set up the home office.
  • Unifi AP AC Pro (Amazon) - Another bit of overkill for home use, but this one was easier to justify than the firewall. Simply put, it has more power, and I need that given the 2′-thick stone walls in the farmhouse.
  • Unifi Cloud Key (Amazon) - Though not strictly necessary, the Cloud Key allows you to run the network controller app on dedicated hardware. It can also be linked to the Unifi cloud portal, allowing for a very convenient and secure hybrid cloud management platform.

Simple Network Diagram

The hardware wasn’t cheap, but surprisingly, it wasn’t much more than I paid for the R7000 two years ago. If I had chosen the regular USG, the price difference would have been negligible.

As for the setup, it was easier than I thought. I racked the USG Pro, plugged in the switch, and then the Cloud Key. Thankfully, I had already run the line to the wireless AP, so that was easy. I also threw in a Raspberry Pi server for fun. It took about 10 minutes to patch everything together. But what about the configuration?

Well, thanks to the Unifi software on the Cloud Key, I was able to “adopt” the other devices and have them configured in no time at all. My basic single-VLAN setup was ready to go out of the box. All told, I had the network up and running in 20 minutes. Time vs. the R7000? Maybe an extra 10 minutes.

Unifi Dashboard

What has it been like living with “Enterprise Lite” hardware at home? Fantastic. Having a useful dashboard that I can glance at to see the status of the home network is a perk I didn’t think I would care about, but I’ve used it several times already. The speed is true gigabit on wired connections, the wireless coverage is solid, and we don’t have random drops in connectivity anymore. And as for patches… I’ve already had two patches come through for the stack. It’s a simple matter of hitting the upgrade button for each device, or setting up auto-upgrade. As far as I’m concerned, I’m never going back to consumer gear again.

To Export the Unexportable Key

March 1, 2017 at 5:04 pm

Every now and then, you have to export a certificate in Windows, and someone forgot to check that little box to let you be able to do it… What is an enterprising SysAdmin to do? Enter Mimikatz (source), a tool that lets you patch the Windows crypto API and do several cool (and frightening) things. The process is very simple.

To Export an Unexportable Private Key:

  1. Create a temp directory
  2. Download the latest version of Mimikatz
  3. Extract the appropriate version (32 or 64 bit) to the temp directory
  4. Open an admin command prompt
  5. Change to the temp directory
  6. Run mimikatz
  7. Type crypto::capi
  8. And finally type crypto::certificates /export

You’ll see all of the certificates in the MY store exported into the temp directory in PFX format. The default password is mimikatz. Want another cert store? Perhaps the computer store? Simply run crypto::certificates /export /systemstore:LOCAL_MACHINE. Check out the GitHub wiki for documentation on this and other cool features of this powerful tool.
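For reference, the whole session boils down to just these commands at the mimikatz prompt (output omitted here):

mimikatz # crypto::capi
mimikatz # crypto::certificates /export
mimikatz # crypto::certificates /export /systemstore:LOCAL_MACHINE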

Authorized_Keys in Active Directory

November 21, 2015 at 6:21 pm

Now that we are implementing more Linux systems, I’m noticing some of the pain points of keeping certain things in sync. A big annoyance, for example, is keeping our infrastructure and users’ SSH keys in sync across all of our machines. There are several methods currently available, but I had issues with each. I’ve listed the two main methods below.

Via Configuration Management

A very DevOpsy way of tackling the problem would be to use a configuration management system like Chef to keep the files updated. In fact, there are several examples of this solution out there already. However, this seems a bit counter-intuitive to me. Why keep user account and related information in a config management system instead of a directory service? This is probably my Windows World bias, but there are others that agree.

Via Scripts/Dedicated Systems

From simple shell scripts, to complex systems, there are many ways to keep this data in sync. The simplest would appear to be setting up NFS and pointing all users’ home directories there… But then you have to keep those NFS servers in sync and backed up across multiple sites, which can be problematic at scale.

Our Solution – AD/LDAP storage of SSH keys

To be up front, this was not my idea. There are many other folks who have implemented similar solutions. We are using this method specifically because we already have a robust AD infrastructure, with all of our Linux authentication going through AD already (a post on this is soon to come). It probably doesn’t make sense for a group that already has a solid solution in, say, Chef or Puppet. For us, it did, and this is how we built it.

First, we had to extend the Active Directory schema. This is not something for the faint of heart, but is also not something to be afraid of. I followed the procedure listed here (after backing things up) and had everything ready to go in about 15 minutes. A note on the procedure: you do not need to use ADSIEdit to manage the custom attribute afterwards. Just open AD Users and Computers and switch to the advanced view mode. Each item will then have an “attributes” tab in its properties page.

Once the schema was extended, the fun began. OpenSSH supports a config variable called “AuthorizedKeysCommand”. This allows us to call an arbitrary script to pull the user’s authorized_keys file. This serverfault post got me going on creating a custom command, but the output of sed wasn’t clean enough. I whipped up the following script in Perl to get everything working nicely. It binds to AD using a username and password and then pulls all sshPublicKey values from the specified user account.

#!/usr/bin/perl
# Gets authorized keys from LDAP. Cleaner than sed and supports any number
# of SSH keys, within reason. Requires Net::LDAP.
use strict;
use warnings;
use Net::LDAP;

my $BINDDN     = "cn=service account,dc=example,dc=com";
my $BINDPW     = "Password";
my $SEARCHBASE = "dc=example,dc=com";
my $SERVER     = "domain or ip";
my $SearchFor  = "samaccountname=$ARGV[0]";

my $ldap = Net::LDAP->new($SERVER) or die "$@";
my $msg  = $ldap->bind($BINDDN, password => $BINDPW);

my $result = $ldap->search(
    base   => $SEARCHBASE,
    filter => $SearchFor,
);

# Print each sshPublicKey value on its own line, which is exactly
# the authorized_keys format that sshd expects.
while (my $entry = $result->shift_entry) {
    foreach ($entry->get_value('sshPublicKey')) {
        print $_, "\n";
    }
}

$ldap->unbind;

Once the script is created, it can be called by adding “AuthorizedKeysCommand /path/to/script” to the sshd_config file. I also had to set the script to run as root using the “AuthorizedKeysCommandUser root” directive.
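The relevant sshd_config lines end up looking like this (with /path/to/script being wherever you put the script above):

AuthorizedKeysCommand /path/to/script
AuthorizedKeysCommandUser root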

Next Steps

I want to improve this script in a few ways long-term…

  1. Since all of our Linux systems are part of our domain, there should be a way to have them bind to LDAP by using the machine’s Kerberos ticket. I don’t like using a username and password, but didn’t have the time to get the Kerberos bind working reliably.
  2. On the security front, this should be a TLS bind. No reason to have the data going over the wire cleartext.
  3. The script should not have to run as root…
  4. Cache the authorized_keys file on a per-user basis. We have a very robust AD infrastructure, but there is always a concern that it could become unavailable. The system’s resiliency would be greatly increased if it could cache the authorized_keys locally on a per-user basis, where sshd would normally look for it.
  5. Error Handling and Logging. It’s not fun, but it’s important. I wanted to get this solution out quickly, but it should be able to log to standard sources and handle some edge cases.
  6. Since the above is a lot of work, perhaps I can just improve a project like ssh-ldap-pubkey to support Kerberos.

 


Flexible Email Alerts for Logstash

November 13, 2015 at 5:02 pm

My company currently does a lot of its debug logging via email. This means that every time an unhandled exception occurs in production, QA, UAT, or integration, we get an email. Thank goodness for custom email rules and single instance storage in Exchange. Oh wait.

I have been a proponent of Logstash and the ELK stack for quite a while now. It is a wonderfully flexible framework for centralizing, enriching, and viewing log data. This past week, I built a proof of concept for management and they loved it. However, many folks wanted to know how we could send out emails from the logging system. I pointed them at the Logstash email output plugin, but they weren’t convinced. They wanted to see some flexible routing capabilities that could be leveraged in any config file, for any log type. Thankfully, this was pretty easy to accomplish.

Below I present a simple tag- and field-based config for email notifications.

# This config is designed to flexibly send out email notifications.
# It *requires* the following to work:
#   Tag "SendEmailAlert" - marks an event for email alerting
#   Field emailAlert_to - the email address to send to
#   Field emailAlert_subject - the subject of the email
#   Field emailAlert_body - the body (typically set to %{message})

output {
  if "SendEmailAlert" in [tags] {
    email {
      address  => "smtp.XXXXX.org"
      username => "XXXXX"
      password => "XXXXX"
      via      => "smtp"
      from     => "logstash.alert@XXXXXX.com"
      to       => "%{emailAlert_to}"
      subject  => "%{emailAlert_subject}"
      body     => "%{emailAlert_body}"
    }
  }
}

As the comments indicate, all you need to do is tag a message with “SendEmailAlert” and add the appropriate fields and voila: flexible email notifications. In order to use it, a simple mutate is all that is needed.

mutate {
    add_tag => ["SendEmailAlert"]
    add_field => { 
       "emailAlert_to" => "user@XXXXX.com"
       "emailAlert_subject" => "Test Alert"
       "emailAlert_body" => "%{message}"
    }
}
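In a real pipeline, that mutate would sit inside whatever condition should fire the alert. For example (the type and loglevel fields here are just placeholders; use whatever your events actually carry):

filter {
  if [type] == "app_log" and [loglevel] == "ERROR" {
    mutate {
      add_tag => ["SendEmailAlert"]
      add_field => {
        "emailAlert_to"      => "user@XXXXX.com"
        "emailAlert_subject" => "Unhandled exception on %{host}"
        "emailAlert_body"    => "%{message}"
      }
    }
  }
}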

We could easily extend it further, but this has been fine for our POC thus far. We have also implemented similar notifications for HipChat and PagerDuty.

Searching for Superfish using PowerShell

February 19, 2015 at 1:31 pm

Lenovo installed a piece of software that could arguably be called malware or spyware. Superfish, as this article indicates, installs a self-signed root certificate that is authoritative for everything. I wanted to be sure that this issue wasn’t present on any of our Lenovo systems, so I turned to PowerShell to help.

I found a copy of the certificate on Robert David Graham’s GitHub here. I pulled the thumbprint from the cert, which appears to be: c864484869d41d2b0d32319c5a62f9315aaf2cbd

Now, some simple PowerShell code will let you run through your local certificate store and see if you have it installed.

Get-ChildItem -Recurse cert:\LocalMachine\ | Where-Object {$_.Thumbprint -eq "c864484869d41d2b0d32319c5a62f9315aaf2cbd"}

You could just as easily replace the Get-ChildItem with “Remove-Item -Path cert:\LocalMachine\root\c864484869d41d2b0d32319c5a62f9315aaf2cbd”, but I wanted to make sure the key wasn’t installed somewhere else.

Now, to take it a step further, I use the AD cmdlets and some more simple PowerShell to search all of my systems for it.

Import-Module ActiveDirectory
$Cred = Get-Credential
$Computers = Get-ADComputer -Filter {enabled -eq $true} | Select-Object Name
foreach ($Computer in $Computers) {
    try {
        # Skip machines that are offline
        if (Test-Connection -Count 1 -Quiet -ComputerName $Computer.Name) {
            Invoke-Command -ComputerName $Computer.Name -Credential $Cred -ScriptBlock {
                Get-ChildItem -Recurse cert:\LocalMachine\ |
                    Where-Object {$_.Thumbprint -eq "c864484869d41d2b0d32319c5a62f9315aaf2cbd"}
            }
        }
    } catch {
        Write-Error ("There was an issue connecting to computer $($Computer.Name): " + $_.Exception)
    }
}

Is it perfect? No. But it gets the job done in relatively short order.

Intro To Chocolatey at NJLOPSA

December 4, 2014 at 12:00 am


I will be giving a presentation on Chocolatey, a Windows package manager, tonight at the New Jersey League of Professional Systems Administrators meetup. It is being held at the Lawrence Headquarters Branch of the Mercer County Library, 2751 Brunswick Pike, Lawrenceville, NJ. Come by and get some cake, meet some folks, and learn about a great tool!
For more details and to register, head over to the meetup: http://www.meetup.com/LOPSA-NJ/events/218257852/

A Hundred Domains and SHA-1 Deprecation

September 17, 2014 at 4:30 pm

Apparently I’ve been living under a rock for a while, because I didn’t know that SHA-1 was being phased out in the immediate future. Thank you, GoDaddy, for notifying me with a month and change to spare. As it turns out, Google will no longer trust certain SHA-1 signed SSL certificates as of the release of Chrome 39, which is set for November.

Because our clients often purchase their own SSL certificates, we have no internal records of what algorithm was used to sign the certificates in use. So now we get to audit slightly over 100 domains to see what signature algorithm each one uses. We could browse to each domain manually and take a look at its certificate, but that would just take way too long. There were some web-based tools around that could do it, but they also only worked on one site at a time.

So, instead, I looked to PowerShell to see what could be done… Unfortunately, there was no native cmdlet to do anything like this! I did find a module with a lot of great PKI-related functionality, the Public Key Infrastructure PowerShell module, but it, too, didn’t expose the signature algorithm I needed. However, it did provide a very robust base on which to build. Below is the solution I came up with.

function get-SSLSigningAlgorithm {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory = $true, ValueFromPipeline = $true, Position = 0)]
        [string]$URL,
        [Parameter(Position = 1)]
        [ValidateRange(1,65535)]
        [int]$Port = 443,
        [Parameter(Position = 2)]
        [Net.WebProxy]$Proxy,
        [Parameter(Position = 3)]
        [int]$Timeout = 15000,
        [switch]$UseUserContext
    )
    $ConnectString = "https://$URL`:$Port"
    $WebRequest = [Net.WebRequest]::Create($ConnectString)
    $WebRequest.Proxy = $Proxy
    $WebRequest.Credentials = $null
    $WebRequest.Timeout = $Timeout
    $WebRequest.AllowAutoRedirect = $true
    # We only want to read the certificate, not validate it
    [Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
    try {$Response = $WebRequest.GetResponse()}
    catch {}
    if ($WebRequest.ServicePoint.Certificate -ne $null) {
        $Cert = [Security.Cryptography.X509Certificates.X509Certificate2]$WebRequest.ServicePoint.Certificate.Handle
        # Emit to the pipeline (rather than Write-Host) so callers can capture it
        $Cert.SignatureAlgorithm.FriendlyName
    } else {
        Write-Error $Error[0]
    }
}

I’ll create a CSV of the domains that I need to check and iterate over them in a foreach loop. That function will be used within the loop to check the sites, and the output will go into another CSV. We’ll use that to plan our re-keying.
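Something like the following should do it (a sketch; domains.csv and its Domain column are just the layout I plan to use):

Import-Csv .\domains.csv | ForEach-Object {
    [PSCustomObject]@{
        Domain    = $_.Domain
        Algorithm = get-SSLSigningAlgorithm -URL $_.Domain
    }
} | Export-Csv .\signing-algorithms.csv -NoTypeInformation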

Hide Disabled AD Accounts from the GAL using PowerShell

September 8, 2014 at 10:48 am

Our account decommission process involves disabling a user and moving them to a “Disabled Domain Accounts” OU. Well, it turns out that our previous admin never actually hid these mailboxes from the Global Address List (GAL), so many of our offshore partners have still been sending emails to them. I decided to start cleaning this up a bit today with the following:

Search-ADAccount -SearchBase "ou=Disabled Domain Accounts,dc=example,dc=local" -AccountDisabled -UsersOnly | Set-ADUser -Replace @{msExchHideFromAddressLists=$true}

Another simple bit of PowerShell. The first command searches within the disabled account OU, and looks for disabled user accounts only. That output is piped into the second command which replaces the Exchange attribute that hides that account from the GAL.
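To spot-check the results, something like this should show the attribute is now set on everything in that OU:

Get-ADUser -SearchBase "ou=Disabled Domain Accounts,dc=example,dc=local" -Filter * -Properties msExchHideFromAddressLists |
    Select-Object Name, msExchHideFromAddressLists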

How to clear all Workstation DNS caches from PowerShell

September 4, 2014 at 2:32 pm

I recently found myself in need of the ability to clear the DNS cache of all the laptops in my company. I found a very powerful and simple way to do so and thought I would share.

$c = Get-ADComputer -Filter {operatingsystem -notlike "*server*"}
Invoke-Command -ComputerName $c.Name -ScriptBlock { ipconfig /flushdns }

The first line queries Active Directory for all computers that are not servers. The second line simply invokes the standard Windows command “ipconfig /flushdns” on all of those computers.

This technique could be used to run any command across all workstations. Very powerful, and dangerous. Use at your own risk!

Finding Expired User Accounts in AD and Resetting Their Passwords with PowerShell

June 2, 2014 at 4:01 pm

The Setup

I came into the office today and was bombarded with users not being able to access our TFS server. Now, before I get too far into this story, you have to understand: Technically I’m only responsible for client-facing infrastructure. However, over the years I’ve started wearing more of a devops hat because, apparently, I’m quite good at it. That means TFS is now largely my problem. Funny how that works, eh? Anyway, back to TFS.

There were a few odd things about this issue, the oddest being that some of our off-shore developers were having no problems and others just couldn’t get in. The users with issues also couldn’t access the web portal. We (or at least I) hadn’t made any changes to TFS in about a month, so I started to investigate.

After a brief panic about SharePoint not being installed properly (hey, I didn’t set up this system, I’m just its current keeper), I managed to trace the issue to network logons. Thank you, Security log! Wait, what’s this? Many, many users recently had their accounts marked as expired… It turns out we just implemented mandatory password rotation, and guess what? Today minus 90 days was the day that a large batch of offshore development accounts were created! So now I had to reset credentials on 35+ accounts, and I’ll be damned if I’m going to do that manually!

Enter PowerShell!

List all accounts in an OU that have expired passwords

Get-ADUser -SearchBase "ou=contractors,dc=example,dc=com" -Filter {Enabled -eq $True} -Properties PasswordExpired | Where-Object {$_.PasswordExpired} | Select-Object -Property SAMAccountName,Name,PasswordExpired | Format-Table

Get-ADUser

SearchBase tells the Get-ADUser command to limit the search to a specific OU. This is very handy since I only have admin access to the one OU anyway. I filtered only for enabled accounts, since trying to filter on PasswordExpired here didn’t work; it’s a constructed property rather than a real AD attribute, so the server-side filter can’t use it. I also explicitly called out the PasswordExpired property. This output was piped to the Where-Object cmdlet.

Where-Object

This was where I filtered on the current object group. Since passwordExpired is a bool, no fanciness needed here. Then I piped the output to Select-Object.

Select-Object

I only cared about some specific data for the output. I used this to select the properties I needed. Finally, I piped to Format-Table to make everything display nicely.

Reset passwords for accounts in an OU with expired passwords

Get-ADUser -SearchBase "ou=contractors,dc=example,dc=com" -Filter {Enabled -eq $True} -Properties PasswordExpired | Where-Object {$_.PasswordExpired} | ForEach-Object {Set-ADAccountPassword -Identity $_.SAMAccountName -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "Changeme1" -Force)}

Get-ADUser & Where-Object

These are the same as in the section above. We are filtering for enabled accounts in the contractors OU. This was piped to one of my favorite commands on earth: ForEach-Object.

ForEach-Object

This is, hands down, one of the handiest commands in PowerShell. Or any language for that matter. In this particular instance, we are running the Set-ADAccountPassword cmdlet for each object that we pass in. We pass the object’s SAMAccountName as the identity. We then create a new secure string password and pass that to -NewPassword, along with the -Reset switch so AD treats this as an administrative reset rather than a user-initiated change. Then you hit Enter and the magic runs!
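Since everyone in that OU now has the same temporary password, it’s worth forcing a change at next logon too. The same pattern should work, just swapping in Set-ADUser:

Get-ADUser -SearchBase "ou=contractors,dc=example,dc=com" -Filter {Enabled -eq $True} |
    Set-ADUser -ChangePasswordAtLogon $true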