A quick introduction: by day, I'm a DevOps Engineer at Red Gate, a software company in Cambridge, UK. Outside of work, I enjoy both amateur radio (hence the callsign, M0VFC) and community broadcast radio at Cambridge 105. This blog aims to span all those interests - so feel free to ignore the posts that aren't relevant!
Feel free to get in touch on Twitter (@rmc47).
73 / Best wishes,
Last night I gave a short talk on how we use Puppet at Redgate, particularly in the context of our (mainly Windows-based) build system.
With the caveat that I'm definitely no expert, and that none of the content should be treated as guaranteed best practice (some of it really isn't!), here are the slides.
It was the second meeting of the DevOps Cambridge group, and the first one I've been to - a really nice crowd, and well worth going along if you're in the area.
After applying the latest set of Windows Updates to some of our EC2 instances, we saw some problems, including very high CPU utilisation by lsass.exe. The strange thing is that it only happened when the instance was listed in an EC2 Load Balancer's instance list - even if there was no traffic hitting that load balancer.
I spent some time this evening tracking it down, and it looks like there's a serious interaction between the patch for the Microsoft SChannel vulnerability (MS14-066 / KB2992611) and EC2's load balancer.
Here are the reproduction steps:
With that setup, run Wireshark on the instance. As soon as the instance is in service on the load balancer, you will see several Mbps of HTTPS handshake traffic between the two. Remove the instance from the load balancer, and after a short period, the traffic stops.
Now, uninstall KB2992611, reboot, add the instance back into the load balancer, and observe everything behaves as expected - only the occasional health checks and one or two handshakes.
By way of comparison, 10s of Wireshark capture with KB2992611 installed resulted in a 20MB capture file; 90s without it installed was only 53kB.
If you then manually download and reinstall KB2992611, the problem recurs, with very high traffic levels.
If the instance is not behind an EC2 Load Balancer, direct HTTPS access from Chrome appears to work fine, with no abnormal traffic observed.
My guess is it's a cipher negotiation issue, but at this point, my knowledge of TLS is inadequate to say any more...
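If you want to see what actually gets negotiated, a quick Python sketch like the one below can report the TLS version a given endpoint agrees to, independently of any browser. This isn't from the original investigation - the hostname is a placeholder, and it only shows the client's view of the handshake:

```python
import socket
import ssl

def negotiated_tls_version(host, port=443, timeout=5):
    """Open a TLS connection to host:port and return the negotiated
    protocol version string, e.g. "TLSv1.2". Handy for checking what a
    server or load balancer actually settles on."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# Example (hypothetical hostname):
# print(negotiated_tls_version("elb-endpoint.example.com"))
```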
Update: disabling TLS 1.2 works around the problem.
If you're affected, set the following registry key to disable TLS 1.2. This causes TLS 1.1 to be negotiated, which is successful:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server] "Enabled"=dword:00000000
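If you'd rather script the change than edit the registry by hand, here's a hypothetical sketch using Python's winreg module. It's Windows-only, needs administrative rights, and (like the manual edit) requires a reboot to take effect - treat it as an illustration, not a tested tool:

```python
import sys

def disable_tls12_server():
    """Set the SCHANNEL registry value that disables server-side TLS 1.2.
    Windows-only sketch; run as Administrator and reboot afterwards."""
    if sys.platform != "win32":
        raise RuntimeError("SCHANNEL registry settings only exist on Windows")
    import winreg  # imported lazily so the module loads on non-Windows hosts
    key_path = (r"SYSTEM\CurrentControlSet\Control\SecurityProviders"
                r"\SCHANNEL\Protocols\TLS 1.2\Server")
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 0)
```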
It's probably better just to disable the affected ciphers though, as now (2014-11-17) recommended by Microsoft.
Update 2, 2014-11-12: Amazon have acknowledged the issue, and are on the case: http://aws.amazon.com/security/security-bulletins/ms14-066-advisory/
Tomorrow morning, I'll be flying off to Bermuda - callsign prefix VP9 - for just over a week, along with Martin G3ZAY, Michael G7VJR and Dom M0BLF.
We'll be active from Thursday 20th to Thursday 28th February, mostly on the higher HF bands, since we're near the top of the sunspot cycle. Bermuda is often active in contests, so we'll pay a good amount of attention to the WARC bands (12/17/30m). The power limit on the island is 150W, so no linear amplifiers are allowed. As a result, we'll probably do quite a bit of CW to make the most of the power we have available.
Equipment-wise, we're taking three Elecraft K3 transceivers and one Kenwood TS590. We're staying with Ed VP9GE, so should have access to his antennas, but we're also taking lightweight verticals and dipoles to augment these and allow us more stations on the air simultaneously.
QSL for VP9/M0VFC, VP9/M0BLF and VP9/G3ZAY should be via Club Log OQRS (preferred) or via our home callsigns. VP9/G7VJR should go via M0OXO. All logs will be uploaded to LoTW as soon as possible.