Authorized_Keys in Active Directory

November 21, 2015 at 6:21 pm

Now that we are implementing more Linux systems, I’m noticing some of the pain points of keeping certain things in sync. A big annoyance, for example, is keeping our infrastructure and users’ SSH keys in sync across all of our machines. There are several methods currently available, but I had issues with each. I’ve listed the two main methods below.

Via Configuration Management

A very DevOpsy way of tackling the problem would be to use a configuration management system like Chef to keep the files updated. In fact, there are several examples of this solution out there already. However, this seems a bit counter-intuitive to me. Why keep user accounts and related information in a config management system instead of a directory service? This is probably my Windows World bias, but there are others who agree.

Via Scripts/Dedicated Systems

From simple shell scripts to complex systems, there are many ways to keep this data in sync. The simplest would appear to be setting up NFS and pointing all users’ home directories there… But then you have to keep those NFS servers in sync and backed up across multiple sites, which can be problematic at scale.

Our Solution – AD/LDAP storage of SSH keys

To be up front, this was not my idea. There are many other folks who have implemented similar solutions. We are using this method specifically because we already have a robust AD infrastructure, with all of our Linux authentication going through AD already (a post on this is soon to come). It probably doesn’t make sense for a group that already has a solid solution in, say, Chef or Puppet. For us, it did, and this is how we built it.

First, we had to extend the Active Directory schema. This is not something for the faint of heart, but it is also not something to be afraid of. I followed the procedure listed here (after backing things up) and had everything ready to go in about 15 minutes. A note on the procedure: you do not need to use ADSI Edit to manage the custom attribute afterwards. Just open Active Directory Users and Computers and enable the Advanced Features view. Each object will then have an Attribute Editor tab in its properties page.

Once the schema was extended, the fun began. OpenSSH supports a config option called “AuthorizedKeysCommand”. This allows us to call an arbitrary script to pull the user’s authorized_keys file. This Server Fault post got me going on creating a custom command, but the output of sed wasn’t clean enough. I whipped up the following script in Perl to get everything working nicely. It binds to AD using a username and password and then pulls all sshPublicKey values from the specified user account.

#!/usr/bin/perl
# Gets authorized keys from LDAP. Cleaner than sed, and supports any
# number of ssh keys, within reason. Requires Net::LDAP.
use strict;
use warnings;
use Net::LDAP;

my $BINDDN     = "cn=service account,dc=example,dc=com";
my $BINDPW     = "Password";
my $SEARCHBASE = "dc=example,dc=com";
my $SERVER     = "domain or ip";

# sshd passes the username being authenticated as the first argument.
my $SearchFor = "samaccountname=$ARGV[0]";

my $ldap = Net::LDAP->new($SERVER) or die "$@";
my $msg = $ldap->bind($BINDDN, password => $BINDPW);
$msg->code and die $msg->error;

my $result = $ldap->search(
    base   => $SEARCHBASE,
    filter => $SearchFor,
);

# Print every sshPublicKey value on the account, one per line,
# which is the format sshd expects back from this command.
while (my $entry = $result->shift_entry) {
    foreach ($entry->get_value('sshPublicKey')) {
        print $_, "\n";
    }
}

$ldap->unbind;

Once the script is created, it can be called by adding “AuthorizedKeysCommand /path/to/script” to the sshd_config file. I also had to set the script to run as root with the “AuthorizedKeysCommandUser root” directive.
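For reference, the relevant sshd_config lines end up looking something like this (the script path is just the placeholder from above, and running the command as root is exactly what item 3 in the next steps is about):

# Pull authorized_keys from AD instead of the user's home directory.
AuthorizedKeysCommand /path/to/script
AuthorizedKeysCommandUser root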

Next Steps

I want to improve this script in a few ways long-term…

  1. Since all of our Linux systems are part of our domain, there should be a way to have them bind to LDAP by using the machine’s Kerberos ticket. I don’t like using a username and password, but I didn’t have the time to get the Kerberos bind working reliably (a rough sketch of the idea follows this list).
  2. On the security front, this should be a TLS bind. No reason to have the data going over the wire cleartext.
  3. The script should not have to run as root…
  4. Cache the authorized_keys file on a per-user basis. We have a very robust AD infrastructure, but there is always a concern that it could become unavailable. The system’s resiliency would be greatly increased if it could cache the authorized_keys locally on a per-user basis, where sshd would normally look for it.
  5. Error Handling and Logging. It’s not fun, but it’s important. I wanted to get this solution out quickly, but it should be able to log to standard sources and handle some edge cases.
  6. Since the above is a lot of work, perhaps I can just improve a project like ssh-ldap-pubkey to support Kerberos.
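For what it’s worth, item 1 would probably look something like this with Authen::SASL. This is an untested sketch, not working code: it assumes the GSSAPI Perl module is installed and that the machine already holds a valid ticket in its default credential cache.

#!/usr/bin/perl
# Hypothetical GSSAPI (Kerberos) bind -- an untested sketch.
use strict;
use warnings;
use Net::LDAP;
use Authen::SASL;

my $sasl = Authen::SASL->new(mechanism => 'GSSAPI');
my $ldap = Net::LDAP->new('domain or ip') or die "$@";

# Bind with the machine's Kerberos ticket instead of a username/password.
my $msg = $ldap->bind(sasl => $sasl);
$msg->code and die $msg->error;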


External Links

I found the following links quite helpful in generating this solution.

Flexible Email Alerts for Logstash

November 13, 2015 at 5:02 pm

My company currently does a lot of its debug logging via email. This means that every time an unhandled exception occurs in production, QA, UAT, or integration, we get an email. Thank goodness for custom email rules and single instance storage in Exchange. Oh wait.

I have been a proponent of Logstash and the ELK stack for quite a while now. It is a wonderfully flexible framework for centralizing, enriching, and viewing log data. This past week, I built a proof of concept for management and they loved it. However, many folks wanted to know how we could send out emails from the logging system. I pointed them at the Logstash email output plugin, but they weren’t convinced. They wanted to see some flexible routing capabilities that could be leveraged in any config file, for any log type. Thankfully, this was pretty easy to accomplish.

Below I present a simple tag- and field-based config for email notifications.

# This config is designed to flexibly send out email notifications.
# It *requires* a tag and certain fields to work:
#   Tag "SendEmailAlert" - marks an event for email alerting
#   Field emailAlert_to - the email address to send to
#   Field emailAlert_subject - the subject of the email
#   Field emailAlert_body - the body of the email (typically %{message})

output {
  if "SendEmailAlert" in [tags] {
    email {
      address => "smtp.XXXXX.org"
      username => "XXXXX"
      password => "XXXXX"
      via => "smtp"
      from => "logstash.alert@XXXXXX.com"
      to => "%{emailAlert_to}"
      subject => "%{emailAlert_subject}"
      body => "%{emailAlert_body}"
    }
  }
}

As the comments indicate, all you need to do is tag a message with “SendEmailAlert”, add the appropriate fields, and voilà: flexible email notifications. To use it, a simple mutate is all that is needed.

mutate {
    add_tag => ["SendEmailAlert"]
    add_field => { 
       "emailAlert_to" => "user@XXXXX.com"
       "emailAlert_subject" => "Test Alert"
       "emailAlert_body" => "%{message}"
    }
}
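In practice, that mutate lives in the filter block behind whatever condition should fire the alert. For example, something like the following, where the [level] field and its “ERROR” value are stand-ins for whatever your own events actually carry:

filter {
  if [level] == "ERROR" {
    mutate {
      add_tag => ["SendEmailAlert"]
      add_field => {
        "emailAlert_to" => "user@XXXXX.com"
        "emailAlert_subject" => "Error on %{host}"
        "emailAlert_body" => "%{message}"
      }
    }
  }
}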

We could easily extend it further, but this has been fine for our POC thus far. We have also implemented similar notifications for Hipchat and PagerDuty.

What to do with an old Christmas tree farm?

October 21, 2015 at 4:29 pm

It’s dark in there…

As the missus and I sit and talk about our new homestead and the directions we are thinking about taking it, one problem keeps coming up: the old Christmas tree stand. You see, dear reader, our homestead used to be a Christmas tree farm back in the 80s. Unfortunately, the previous owners decided not to keep the farm going and let the trees grow up. On the surface this may not appear to be an issue, that is, until you consider planting densities.

Normal pine tree stands are planted at about 400-500 trees per acre. This allows them to grow straight and healthy. Stands like that can be used for lumber and wood pulp and can net a good amount of money when they mature. However, Christmas tree farms are planted at 1,000-1,500 trees per acre. This is no problem if the trees are kept small and regularly trimmed… Unfortunately, that’s not the case here. Our stand is dense. It’s dark in there. This level of density leads to really unhealthy trees, and from the research I’ve been doing, it appears that there is not much that can be done.

It seems that our options are limited to the following:

  • Leave it be – The trees will keep growing, and will start dying off. This will likely result in a bad situation for both domestic and wild animals, not to mention the lack of productivity of that patch of the homestead.
  • Selective thinning – This would involve getting a lumber/pulp company in to selectively harvest every other row of trees. This may not be an option because of the density; you can’t really get equipment in there. That means it might just be me with a chainsaw.
  • Harvest the whole thing – This is the option that I really don’t like, but it seems to be the best all around. It would net some cash from the sale of the wood and would allow us to plant a new, healthy forest and silvopasture using permaculture principles. The main problem here would be handling the stumps and the time it would take for a new forest to establish itself.

In case anyone is interested, I’ve also compiled a few links on the topic.

And here are some additional photos:

Finding a Home(stead)

July 12, 2015 at 3:36 pm

Finding a place to call your own is quite a step in life. To me, it means you’re ready to settle in, put down some roots, and potentially create a legacy. After discussing various options with my wife, we decided now would be a good time to consider buying a home. I thought I would document the reasons for the move for posterity’s sake.

Reasons

Investment

The thing that started us down this path was a simple realization: we pay a lot out in rent, and it would be really nice to “invest” that money instead. My wife and I currently live in a nice townhouse, in a good planned development near Princeton, NJ. Though we live within our means, we still pay out quite a bit for rent. For that, we get three bedrooms (really one bedroom and two offices), access to a pool, and lots of restrictions (including a no-charcoal-grill policy).

At the time, it made sense for us to move in to the development and not worry about anything. We could concentrate on paying off our debts, saving money, and helping my mother sell her old home in Queens, NY. Now that we’re done with all of that, though, it’s time to move on. Although many don’t consider a home an investment, putting equity in our pocket is certainly better than having it leave the family entirely.

Healthy, Self-Sufficient Lifestyle

My wife and I have recently started to adopt a “healthy lifestyle”. For us, that means eating more natural foods with fewer carbs and less processed junk. The first thing we realized after making these changes was that the cost of high-quality food adds up quickly. The next thing we noticed was that we really did feel a lot better, so the premium was worth it.

Aside from food, we spend a lot of time commuting. Currently, my wife is commuting two hours, one way, by public transit, daily. This just isn’t healthy for so many reasons. I am in a better position, commuting a little over an hour one-way, by car, twice a day, but this takes quite a toll on us in many ways.

First, we don’t eat until 8:30-9:00 PM most nights. When you go to bed at 10 PM, this is a really bad thing. Second, we don’t have time for each other. Having an hour, two if you’re lucky, to catch up with your significant other is a terrible way to live a life. I love my wife; it’s why I married her! Finally, we don’t have time for our hobbies. My wife is a bit luckier in this regard as her hobby, knitting, is portable. I have taken to listening to podcasts on my commute, as no one wants to listen to a table saw going at 9:30 PM.

A Base of Operations with a Sense of Permanence

Living in a development has taught me two things: I love having someone else mow the lawn / shovel snow, and I hate not having a back yard to do projects in. Even with two offices, there’s no place for a shop, garden, or sheep. Yes, you read that right: sheep.

My wife is a knitter and she loves her hobby. She wants sheep, and I want her to be happy, so we need a place for sheep. My hobby is woodworking and building things. I’ve always been limited to a corner of a garage or basement, when I’ve been lucky enough to have a place at all. So room for a shop is a must for me. Finally, there’s the garden. We had one in the past, and it was a wonderful experience. Now that we are focusing on healthier eating, what better way to get the best produce at the best prices than to grow it ourselves?

So on to the home search!

Searching for Superfish using PowerShell

February 19, 2015 at 1:31 pm

Lenovo installed a piece of software that could arguably be called malware or spyware. Superfish, as this article indicates, installs a self-signed root certificate that is authoritative for everything. I wanted to be sure that this issue wasn’t present on any of our Lenovo systems, so I turned to PowerShell to help.

I found a copy of the certificate on Robert David Graham’s GitHub here. I pulled the thumbprint from the cert, which appears to be: c864484869d41d2b0d32319c5a62f9315aaf2cbd

Now, some simple PowerShell code will let you run through your local certificate store and see if you have it installed.

Get-ChildItem -Recurse cert:\LocalMachine\ |where {$_.Thumbprint -eq "c864484869d41d2b0d32319c5a62f9315aaf2cbd"}

You could just as easily replace the Get-ChildItem with “Remove-Item -Path cert:\LocalMachine\root\c864484869d41d2b0d32319c5a62f9315aaf2cbd”, but I wanted to make sure the key wasn’t installed somewhere else.
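Or, combining the two, a sketch like this would delete the cert from whichever store it turns up in (PowerShell 3.0 or later lets Remove-Item work against the cert: drive):

# Find the Superfish cert in any LocalMachine store and remove it.
Get-ChildItem -Recurse cert:\LocalMachine\ |
    Where-Object {$_.Thumbprint -eq "c864484869d41d2b0d32319c5a62f9315aaf2cbd"} |
    Remove-Item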

Now, to take it a step further, I use the AD cmdlets and some more simple PowerShell to search all my systems for it.

Import-Module ActiveDirectory
$Cred = Get-Credential
$Computers = Get-ADComputer -Filter {enabled -eq $true} | Select-Object Name
foreach ($Computer in $Computers) {
    try {
        if (Test-Connection -Count 1 -ComputerName $Computer.Name) {
            # Search every LocalMachine store on the remote box for the Superfish thumbprint.
            Write-Output (Invoke-Command -ComputerName $Computer.Name -Credential $Cred -ScriptBlock {
                Get-ChildItem -Recurse cert:\LocalMachine\ | Where-Object {$_.Thumbprint -eq "c864484869d41d2b0d32319c5a62f9315aaf2cbd"}
            })
        }
    } catch {
        Write-Error ("There was an issue connecting to computer $($Computer.Name): " + $_.Exception)
    }
}

Is it perfect? No. But it gets the job done in relatively short order.

Intro To Chocolatey at NJLOPSA

December 4, 2014 at 12:00 am


I will be giving a presentation on Chocolatey, a Windows package manager, tonight at the New Jersey League of Professional Systems Administrators meetup. It is being held at the Lawrence Headquarters Branch of the Mercer County Library, 2751 Brunswick Pike, Lawrenceville, NJ. Come by and get some cake, meet some folks, and learn about a great tool!
For more details and to register, head over to the meetup: http://www.meetup.com/LOPSA-NJ/events/218257852/

A Hundred Domains and SHA-1 Deprecation

September 17, 2014 at 4:30 pm

Apparently I’ve been living under a rock for a while, because I didn’t know that SHA-1 was being phased out in the immediate future. Thank you, GoDaddy, for notifying me with a month and change to spare. As it turns out, Google will no longer be trusting certain SHA-1 signed SSL certificates with the release of Chrome 39, which is set for November. For details, see the following links.

Because our clients often purchase their own SSL certificates, we have no internal records showing which algorithm was used to sign the certificates in use. So now we get to audit slightly over 100 domains to see which signature algorithm each one uses. We could browse to each domain manually and take a look at its certificate, but that would just take way too long. There were some web-based tools around that could do it, but they also only worked on one site at a time.

So, instead, I looked to PowerShell to see what could be done… Unfortunately, there was no native cmdlet to do anything like this! I did find a module with a lot of great PKI-related functionality, the Public Key Infrastructure PowerShell module, but it, too, didn’t expose the much-needed signature algorithm. However, it did provide a very robust base on which to build. Below is the solution I came up with.

function Get-SSLSigningAlgorithm {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory = $true, ValueFromPipeline = $true, Position = 0)]
        [string]$URL,
        [Parameter(Position = 1)]
        [ValidateRange(1,65535)]
        [int]$Port = 443,
        [Parameter(Position = 2)]
        [Net.WebProxy]$Proxy,
        [Parameter(Position = 3)]
        [int]$Timeout = 15000
    )
    $ConnectString = "https://$url`:$port"
    $WebRequest = [Net.WebRequest]::Create($ConnectString)
    $WebRequest.Proxy = $Proxy
    $WebRequest.Credentials = $null
    $WebRequest.Timeout = $Timeout
    $WebRequest.AllowAutoRedirect = $true
    # Accept any certificate; we only want to read it, not validate it.
    [Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
    try {$Response = $WebRequest.GetResponse()}
    catch {}
    if ($WebRequest.ServicePoint.Certificate -ne $null) {
        $Cert = [Security.Cryptography.X509Certificates.X509Certificate2]$WebRequest.ServicePoint.Certificate.Handle
        # Write-Output (not Write-Host) so the result can be captured and exported.
        Write-Output $Cert.SignatureAlgorithm.FriendlyName
    } else {
        Write-Error $Error[0]
    }
}

I’ll create a CSV of the domains that I need to check, and iterate over them in a for-each loop. That function will be used within the loop to check the sites, and the output will go into another CSV. We’ll use that to plan our re-keying.
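As a rough sketch of that loop (the file name and its Domain column are my own placeholders; adjust to taste):

# Check every domain in the input CSV and record its signature algorithm.
Import-Csv .\domains.csv | ForEach-Object {
    [PSCustomObject]@{
        Domain    = $_.Domain
        Algorithm = Get-SSLSigningAlgorithm -URL $_.Domain
    }
} | Export-Csv .\signature-algorithms.csv -NoTypeInformation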

Hide Disabled AD Accounts from the GAL using PowerShell

September 8, 2014 at 10:48 am

Our account decommission process involves disabling a user and moving them to a “Disabled Domain Accounts” OU. Well, it turns out that our previous admin never actually hid these mailboxes from the Global Address List (GAL), so many of our offshore partners have still been sending emails to them. I decided to start cleaning this up a bit today with the following:

Search-ADAccount -SearchBase "ou=Disabled Domain Accounts,dc=example,dc=local" -AccountDisabled -UsersOnly | Set-ADUser -Replace @{msExchHideFromAddressLists=$true}

Another simple bit of PowerShell. The first command searches within the disabled account OU, and looks for disabled user accounts only. That output is piped into the second command which replaces the Exchange attribute that hides that account from the GAL.
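To spot-check the change afterwards, a quick sketch like this should show the attribute now set (True) on everything in that OU:

# Verify the attribute took; every row should show True.
Get-ADUser -SearchBase "ou=Disabled Domain Accounts,dc=example,dc=local" -Filter * -Properties msExchHideFromAddressLists |
    Select-Object Name, msExchHideFromAddressLists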

How to clear all Workstation DNS caches from PowerShell

September 4, 2014 at 2:32 pm

I recently found myself in need of the ability to clear the DNS cache of all the laptops in my company. I found a very powerful and simple way to do so and thought I would share.

$c = Get-ADComputer -Filter {operatingsystem -notlike "*server*"}
Invoke-Command -ComputerName $c.Name -ScriptBlock { ipconfig /flushdns }

The first line queries Active Directory for all computers that are not servers. The second line simply invokes the normal Windows command “ipconfig /flushdns” on all of those computers.

This technique could be used to run any command across all workstations. Very powerful, and dangerous. Use at your own risk!
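To illustrate, the same pattern with a different payload (a hypothetical example, forcing a Group Policy refresh on every workstation instead) would be:

# Same technique, different command: refresh Group Policy on all workstations.
$c = Get-ADComputer -Filter {operatingsystem -notlike "*server*"}
Invoke-Command -ComputerName $c.Name -ScriptBlock { gpupdate /force }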

Finding Expired User Accounts in AD and Resetting Their Passwords with PowerShell

June 2, 2014 at 4:01 pm

The Setup

I came into the office today and was bombarded with users not being able to access our TFS server. Now, before I get too far into this story, you have to understand: Technically I’m only responsible for client-facing infrastructure. However, over the years I’ve started wearing more of a devops hat because, apparently, I’m quite good at it. That means TFS is now largely my problem. Funny how that works, eh? Anyway, back to TFS.

There were a few odd things about this issue, the oddest being that some of our offshore developers were having no problems and others just couldn’t get in. The users with issues also couldn’t access the web portal. We (at least, I) hadn’t made any changes to TFS in about a month, so I started to investigate.

After a brief panic about SharePoint not being installed properly (hey, I didn’t set up this system, I’m just its current keeper), I managed to trace the issue to network logons. Thank you, Security log! Wait, what’s this? Turns out many, many users recently had their accounts marked as expired… We had just implemented mandatory password rotation, and guess what? Today minus 90 days was the day that a large batch of offshore development accounts was created! So now I had to reset credentials on 35+ accounts, and I’ll be damned if I’m going to do that manually!

Enter PowerShell!

List all accounts in an OU that have expired passwords

Get-ADUser -SearchBase "ou=contractors,dc=example,dc=com" -Filter {Enabled -eq $True} -Properties PasswordExpired | Where-Object {$_.PasswordExpired} | Select-Object -Property SAMAccountName,Name,PasswordExpired | Format-Table

Get-ADUser

SearchBase tells the Get-ADUser command to limit the search to a specific OU. This is very handy since I only have admin access to the one OU anyway. I filtered only for enabled accounts, since trying to filter on PasswordExpired here didn’t work (likely because PasswordExpired is a constructed property rather than a real AD attribute, so the server-side filter can’t see it). I also explicitly called out the PasswordExpired property. This output was piped to the Where-Object cmdlet.

Where-Object

This was where I filtered on the current object group. Since passwordExpired is a bool, no fanciness needed here. Then I piped the output to Select-Object.

Select-Object

I only cared about some specific data for the output. I used this to select the properties I needed. Finally, I piped to Format-Table to make everything display nicely.

Reset passwords for accounts in an OU with expired passwords

Get-ADUser -SearchBase "ou=contractors,dc=example,dc=com" -Filter {Enabled -eq $True} -Properties PasswordExpired | Where-Object {$_.PasswordExpired} | ForEach-Object {Set-ADAccountPassword -Identity $_.SAMAccountName -NewPassword (ConvertTo-SecureString -AsPlainText "Changeme1" -Force)}

Get-ADUser & Where-Object

These are the same as in the section above. We are filtering for enabled accounts in the contractors OU. This was piped to one of my favorite commands on earth: ForEach-Object.

ForEach-Object

This is, hands down, one of the handiest commands in PowerShell. Or any language, for that matter. In this particular instance, we are running the Set-ADAccountPassword cmdlet for each object that we pass in. We pass the object’s SAMAccountName as the identity. We then create a new secure string password and pass that to -NewPassword. Then you hit enter and the magic runs!
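One optional extra I may add (not something I ran that day): since every account ends up with the same temporary password, it would probably be wise to force a change at next logon in the same loop. A sketch:

Get-ADUser -SearchBase "ou=contractors,dc=example,dc=com" -Filter {Enabled -eq $True} -Properties PasswordExpired |
    Where-Object {$_.PasswordExpired} |
    ForEach-Object {
        # Reset to the temporary password, then require a change at next logon.
        Set-ADAccountPassword -Identity $_.SAMAccountName -NewPassword (ConvertTo-SecureString -AsPlainText "Changeme1" -Force)
        Set-ADUser -Identity $_.SAMAccountName -ChangePasswordAtLogon $true
    }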