Total Meltdown

The Meltdown / Spectre saga continues. Ulf Frisk has just posted a description of a vulnerability he has dubbed “Total Meltdown”. It seems that Microsoft developers introduced an even worse vulnerability while fixing the Meltdown vulnerability in Windows 7 and Windows Server 2008 R2. With this broken Meltdown “fix” installed, any program can read or write any word in any other program’s memory, or the kernel’s memory for that matter, just by reaching out and touching – no special tricks required. The cure is worse than the disease.
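Frisk’s write-up traces the bug to a single permission bit: the patched kernel left the page-table entry that maps the page tables themselves accessible from user mode. The toy model below is ordinary Python, not Windows internals – the names and structure are illustrative assumptions – but it sketches why one wrong User/Supervisor bit on that one entry defeats user/kernel isolation entirely:

```python
# Toy model (NOT real Windows internals): one wrong permission bit on
# the entry mapping the page tables themselves breaks all isolation.

USER = "user"
SUPERVISOR = "supervisor"

class PageTableEntry:
    def __init__(self, frame, user_accessible):
        self.frame = frame                    # what this entry maps
        self.user_accessible = user_accessible  # the User/Supervisor bit

def can_access(entry, mode):
    """CPU-style check: supervisor-only pages are off-limits to user code."""
    return mode == SUPERVISOR or entry.user_accessible

# Correct setup: the self-referencing entry that maps the page tables
# is supervisor-only, so user code cannot touch translations.
self_map_ok = PageTableEntry(frame="page-tables", user_accessible=False)
assert not can_access(self_map_ok, USER)

# The botched patch: the same entry marked user-accessible. User code
# can now rewrite page-table entries directly and map in any physical
# page it likes – i.e. read or write anything, no exploit required.
self_map_broken = PageTableEntry(frame="page-tables", user_accessible=True)
assert can_access(self_map_broken, USER)
```

Once the self-map is writable from user mode, an attacker simply edits translations to point at any physical page, which is exactly the “read or write any word in any other program’s memory” behavior described above.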

Microsoft will be in for harsh criticism on this, and not just because of the botched fix. It turns out Microsoft neglected to tell the world about the problem when it issued the fix in the March 13 update for Windows 7 and Server 2008 R2. End users had no idea how seriously exposed their systems were, or how urgently they did (and still do) need to install the latest security update for the affected platforms.

For control system owners and operators, it gets even worse. Industrial control systems still run a great many Windows 7 and Server 2008 R2 machines, and they need those operating systems to work correctly. The March 13 update is known to break network communications on some machines that use static IP addresses – and almost all control system machines have static IP addresses. Owners and operators will need to test the latest security update thoroughly before applying it, to make sure their control systems still work.

Pretty much all this nonsense was predictable. The second law of SCADA security states that all software has bugs, some bugs are security vulnerabilities, and so all software can be hacked. Madly applying security updates the moment they are available is dangerous – the cure may be worse than the disease. In the best case the updates only fix known vulnerabilities, leaving unknown vulnerabilities still to be discovered. Our enemies know how to find new vulnerabilities and do so routinely. In the worst case, the patches introduce new and more serious vulnerabilities and other defects.

Take Meltdown – I just came from a CUUG presentation by Theo de Raadt, a security guru, a fellow Calgarian and the founder and leader of the OpenBSD and OpenSSH projects. We were talking about Meltdown and he pointed out that in January, when Meltdown was disclosed, he went on record observing that “Suddenly the trickiest parts of a kernel need to do backflips to cope with problems deep in the micro-architecture [of the CPU]” and predicted that “Decades old trap/fault software is being replaced in 10-20 operating systems, and there are going to be mistakes made.” Microsoft just got caught making one of those mistakes, a big one.

Theo’s comments this evening? “There will be more mistakes – just watch.”

“Quick, Patch Everything!”
What advice will we see on this matter for control system owners and operators from the usual IT-focused pundits? We will see the usual: “Quick, patch everything!” and “I know patching is costly and dangerous – suck it up, do it anyways.”

Thanks guys.

Waterfall’s CEO and Co-Founder, Lior Frenkel, gave a presentation at the (free) DHS ICSJWG 2016 spring meeting entitled “Stop Patching, It’s Stupid.” The presentation drew a standing-room-only crowd. Patching industrial control systems is difficult, costly, and sometimes physically dangerous. When the cure is worse than the disease, very bad things can happen. That said, when industrial networks are reachable from Internet-exposed and IT networks, whether directly or indirectly, this difficult, costly and dangerous patching exercise is essential. But when industrial networks are thoroughly protected from contamination by IT networks, patching turns into something that, yes, we do, eventually. There is no need for the dangerous, mad panic that is characteristic of IT-exposed networks.

Thorough Control System Protection
What do thoroughly-protected SCADA/control system networks look like? Well the third law of SCADA security states that all cyber attacks are information, and every single bit of information can be an attack. To prevent attacks from reaching our SCADA and control system networks we need to thoroughly control the flow of information into those networks. Very briefly, at Waterfall we see our customers doing all of the following:
  • They deploy Waterfall’s Unidirectional Security Gateways that permit information to flow transparently out of important networks, but physically prevent any information or attacks from flowing back in. Firewalls are software after all, so our customers forbid firewalls between important control networks and untrustworthy IT-exposed networks.
  • Our customers deploy physical security measures sufficient to prevent USB drives and other removable media and laptops from being carried into and connected to important networks.
  • They test absolutely everything that comes back into critical networks for weeks on isolated test beds, to make sure there are no adverse impacts on safe or reliable physical operations. When they must carry new software into the control system, that software comes from the proven test bed, not from untrustworthy, IT-exposed machines.
  • Our customers thoroughly inspect and test even brand-new computer hardware and software on those same test beds to gain confidence that the technology has not been tampered with during the procurement process.
  • And our customers carry out all the usual measures to screen employees and vendors who are trusted to come physically close to the physical, industrial process and the control computers.
None of this is news. All of it is documented in my 2016 book SCADA Security – What’s broken and how to fix it. My next book will have even more detail – hopefully available this fall.

In January of this year I pointed out that Waterfall has offered to make copies of SCADA Security available free of charge to owners, operators, security practitioners, educators and the press for the duration of the Meltdown/Spectre emergency.

The emergency continues. The offer is still open.

1 comment:

  1. The problem is compounded in small to medium facilities that do not have the resources (time, money and/or expertise) to set up test beds to vet patches. They are reduced to patch-and-hope or don't-patch-and-hope. In either case, 'hope' is not a strong cybersecurity tool.