Thoughts on Critical Infrastructure Protection

In a very interesting paper for the International Risk Governance Council, Ortwin Renn describes a framework that provides an “analytic structure for investigating and supporting the treatment of risk issues.” Renn argues that risks are “mental ‘constructions’” in which actors link signals from the real world with “experience of actual harm” and “the belief that human action can prevent harm in advance.” How actors come to understand threats and risks is therefore critical, because it shapes risk governance. Renn defines “risk governance” as follows:

On a national scale, governance describes structures and processes for collective decision-making involving governmental and non-governmental actors (Nye and Donahue 2000). Governing choices in modern societies is seen as an interplay between governmental institutions, economic forces and civil society actors (such as NGOs). At the global level, governance embodies a horizontally organised structure of functional self-regulation encompassing state and non-state actors bringing about collectively binding decisions without superior authority (cf. Rosenau 1992; Wolf 2002).

Renn suggests that there are four phases of risk governance: pre-assessment, appraisal, tolerability and acceptability judgement, and management. My interest at this point is in the pre-assessment phase. How do we come to understand the nature of cyber threats to critical infrastructure (CI), and how do we assess the risks they pose? Framing is a critical component of the pre-assessment phase because it determines what information is relevant and how risk is perceived. It shapes how we come to understand a threat and indicates a direction for action.

Framing

Cyber threats, and in particular cyber threats relating to CI, have largely been framed at two polar extremes.

On one end of the continuum we have those who believe cyber threats to be nothing more than a nuisance. It is not uncommon to hear “our systems are well protected”, followed by “everyone gets a virus every now and then” and finally “well, our critical systems are firewalled, air-gapped, not connected to the Internet.” In this case threats are treated as “standalone” cases that are largely technical in nature.

From such a perspective, technical considerations take precedence over who might be behind the attacks, what the motivations of the attackers are, and what the consequences of cyber intrusions are — for example, what could the attackers have done with the level of access obtained, the documents stolen, or the contacts harvested? This results in underestimated risk, de-prioritization, and sometimes even inaction.

On the other side you’ll hear that China/Russia/Hackers – whatever the flavour of the day is – have thoroughly infiltrated CI and can “shut down the electrical grid” or “crash airplanes into one another” or … insert catastrophe here.

From this perspective, political persuasions concerning attribution are prioritized over technical considerations. The scenarios used to illustrate the threat are often ill-conceived and do not reflect the technical and operational environments they are meant to address. Rather than focus on “boring” details, this perspective seeks to lay blame on external sources. This results in overestimated risk, which sometimes breeds disbelief and therefore inaction, or, when it does spur action, often focuses that action on the wrong threats.

For example, many cite the case of the Australian sewage hacker. An attacker managed to compromise a waste management system near Brisbane, Australia, and intentionally caused millions of litres of raw sewage to spill out over a Sunshine Coast suburb. Many, including the WSJ, ask, “what if hackers located in China or Russia were able to conduct attacks like this here?”

Well, it turns out that the “hacker” in this case was an insider: he was “employed by the company that had installed the system” that he later “hacked,” and he had specialized knowledge of the system. Moreover, the attack did not occur over the Internet: the attacker issued commands over radio and had to be within a 25-mile radius, something that does not apply to hackers operating out of China or Russia. Security solutions focusing solely on external threats would not have protected this installation.

Re-framing the problem reveals a much more complex threat landscape. CI faces “integrated threats” which encompass the intersection of cyberspace and “meatspace”. The “cyber” threat is not purely digital. Nor is the threat limited to only emergencies or catastrophic events.

CI encompasses the private and public sectors and is increasingly reliant on Internet connectivity (in some capacity). Government relies on networks operated by private firms, which contract with other private firms and so on. “Ownership” of the cyber security process is distributed across all the entities responsible for the setup and installation of these systems, through to operations and maintenance.

In addition, individual operations are reliant on others, something that the Northeast Blackout of 2003 demonstrated so vividly: the operations of Canadian CI were negatively impacted by the failure of, in that case, a foreign operator.

This same lesson applies to cyberspace.

Defence therefore needs to be conceptualized not just in terms of firewalls and IDS, but also in terms of the security of operations at each stage (computer security, software security, the tools used during installation, remote access for maintenance, and connections from “trusted” operators), as well as the “insider” threat.
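To make the point concrete, here is a minimal sketch, in Python, of what enumerating threats across a system’s lifecycle might look like; the stage names, threat vectors, and controls are invented for illustration:

```python
# A minimal sketch of lifecycle-wide threat coverage. The stage names,
# threat vectors, and controls below are illustrative inventions, not
# drawn from any particular standard or report.

LIFECYCLE_THREATS = {
    "installation": {"tampered install media", "vendor default passwords"},
    "operation":    {"network intrusion", "insider misuse"},
    "maintenance":  {"compromised remote access", "unpatched software"},
    "peering":      {"compromise via a 'trusted' partner connection"},
}

DEPLOYED_CONTROLS = {
    "network intrusion":  ["firewall", "IDS"],
    "unpatched software": ["patch management"],
    # Note: nothing listed for insider misuse or trusted-peer compromise.
}

def coverage_gaps(threats, controls):
    """Return, per lifecycle stage, the threat vectors with no deployed control."""
    return {
        stage: sorted(v for v in vectors if not controls.get(v))
        for stage, vectors in threats.items()
        if any(not controls.get(v) for v in vectors)
    }

if __name__ == "__main__":
    for stage, gaps in coverage_gaps(LIFECYCLE_THREATS, DEPLOYED_CONTROLS).items():
        print(f"{stage}: uncovered -> {', '.join(gaps)}")
```

The code itself is trivial; the point is the shape of the exercise. Firewalls and IDS cover only one cell of a much larger matrix, and the uncovered cells (insider misuse, compromise via a trusted peer) are exactly the ones the narrow framing misses.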

The good thing is that this re-framing is occurring. There are now a variety of public documents that serve as an early warning of the potential threat CI faces – I’ll briefly discuss two of them:

1. A report by the Department of Transportation’s Inspector General that documents a variety of attacks against and vulnerabilities in the FAA’s air traffic control system (May 2009)

2. A report commissioned by the Department of Energy investigating common cyber security vulnerabilities in control systems (November 2008)

Early Warning

The DOT report documented successful attacks that have affected FAA networks. In 2006 the FAA shut down a “portion of its ATC systems in Alaska” due to a “viral attack,” and in 2008 FAA computers, again in Alaska, were compromised and 40,000 usernames and passwords were stolen. In 2009 an “FAA public-facing Web application computer” was compromised, leading to the theft of “PII on 48,000 current and former FAA employees.”

During an audit, vulnerabilities were also found in various Web applications that would have allowed attackers to access the data stored on those computers. These included public-facing systems, such as those that list “communications frequencies for pilots and controllers,” as well as internal systems used by the FAA:

  • Unauthorized access was gained to information stored on Web application computers associated with the Traffic Flow Management Infrastructure System, Juneau Aviation Weather System, and the Albuquerque Air Traffic Control Tower;
  • Unauthorized access was gained to an ATC system used to monitor critical power supply at six en route centers; and
  • A vulnerability found on Web applications associated with the Traffic Flow Management Infrastructure system was confirmed, which could allow attackers to install malicious code on FAA users’ computers.

According to the report, “[t]his occurred because (1) Web applications were not adequately configured to prevent unauthorized access and (2) Web application software with known vulnerabilities was not corrected in a timely manner by installing readily available security software patches released to the public by software vendors.”
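The second failure mode, running software with publicly known vulnerabilities long after fixes are available, is the kind of “boring” problem that can be checked for mechanically. A rough sketch, assuming a hypothetical asset inventory and advisory feed (neither of which the report specifies):

```python
# Rough sketch of a patch-lag audit. The inventory and the advisory
# list are hypothetical stand-ins for whatever asset-management and
# vendor-advisory sources an operator actually has.

from datetime import date

inventory = [
    {"host": "webapp-01", "package": "httpd",   "version": "2.0.52"},
    {"host": "webapp-02", "package": "openssl", "version": "0.9.7"},
]

advisories = [
    {"package": "httpd",   "fixed_in": "2.0.63", "published": date(2008, 1, 15)},
    {"package": "openssl", "fixed_in": "0.9.8",  "published": date(2007, 10, 1)},
]

def version_tuple(v):
    """Naive dotted-version comparison; real tooling should be smarter."""
    return tuple(int(x) for x in v.split("."))

def overdue(inventory, advisories, today):
    """Yield hosts still running versions older than a published fix."""
    for item in inventory:
        for adv in advisories:
            if (item["package"] == adv["package"]
                    and version_tuple(item["version"]) < version_tuple(adv["fixed_in"])):
                yield item["host"], item["package"], (today - adv["published"]).days

for host, pkg, days in overdue(inventory, advisories, date(2009, 5, 1)):
    print(f"{host}: {pkg} missing a public fix for {days} days")
```

Real tooling would use proper version-comparison logic and live advisory data, but even this naive check surfaces the metric that matters: how long known-vulnerable software has been left exposed.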

The report on common cyber security vulnerabilities in control systems, commissioned by the DOE, identified similar problems, along with serious concerns about the use of plain-text communications protocols and the lack of security surrounding remote access systems. The report found:

“If compromised, an attacker could utilize these systems to cause catastrophic damage or outages directly or by exploiting paths to critical end devices or connected SCADA systems.”

Typically, the network environment is divided into a “business LAN” and a “control system LAN” with a firewall in between. Sometimes a DMZ is created to share data between the corporate and control system LANs.

The report found that:

  • Firewall and router filtering deficiencies allowed access to control system components through external and internal networks: unrestricted telnet access was permitted to DMZ network equipment, VPNs were misconfigured, and remote desktop passwords were shared between security zones (corporate and control system networks); a minimal audit sketch follows this list.
  • It was possible to escalate privileges from a non-control system application (published remotely by the display application) to a control system application.
  • A malicious user who has physical access to an unsecured port on a network switch could plug into the network behind the firewall to defeat its incoming filtering protection.
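Several of these findings boil down to violations of a simple zone policy: traffic between the business LAN, the DMZ, and the control system LAN should only flow along narrow, explicit paths. A minimal audit sketch, where the rule format and the policy are my own simplification rather than anything from the report:

```python
# Minimal zone-policy check. The rule tuples and the allowed-path
# policy are simplified illustrations, not the report's actual data.

# (source zone, destination zone, service)
firewall_rules = [
    ("business", "dmz",     "https"),
    ("dmz",      "control", "historian-replication"),
    ("business", "control", "rdp"),     # direct path that skips the DMZ
    ("any",      "dmz",     "telnet"),  # unrestricted telnet, as in the report
]

# Only these zone-to-zone paths are permitted, and only for these services.
ALLOWED = {
    ("business", "dmz"):     {"https"},
    ("dmz",      "control"): {"historian-replication"},
}

def violations(rules, allowed):
    """Yield rules that use cleartext admin protocols or unapproved paths."""
    for src, dst, svc in rules:
        if svc == "telnet":
            yield src, dst, svc, "cleartext admin protocol"
        elif svc not in allowed.get((src, dst), set()):
            yield src, dst, svc, "path not in zone policy"

for src, dst, svc, reason in violations(firewall_rules, ALLOWED):
    print(f"flag: {src} -> {dst} ({svc}): {reason}")
```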

A very interesting theme throughout the report was the focus on remote, trusted endpoints. The report found that the Inter-Control Center Communications Protocol (ICCP), “an open protocol used internationally in the electric power industry to exchange data among utilities, regional transmission operators, independent power producers, and others,” uses plain text, and that such connections should be treated as “untrustworthy” and placed in a separate DMZ.

In other words, operators within the industry treat remote connections between them as trustworthy, bypassing the security procedures in place. This means that even if your operation is relatively secure, an attacker may be able to bypass it by compromising a less secure peer.
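One practical consequence: even where a protocol offers no confidentiality of its own, the transport between peers can be authenticated and encrypted rather than trusted by default. A hedged sketch, with hypothetical host names and certificate paths, of refusing to talk to a peer link except over mutually authenticated TLS (secure variants of ICCP exist; this only illustrates the posture at the transport layer):

```python
# Sketch: refuse to talk to a peer link except over mutually
# authenticated TLS. Host names and certificate paths are hypothetical;
# this shows the "treat the peer as untrustworthy" posture at the
# transport layer, not the ICCP protocol itself.

import socket
import ssl

PEER_HOST = "iccp-peer.example.net"  # hypothetical peer gateway
PEER_PORT = 8102                     # hypothetical port

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("peer-ca.pem")                   # trust only this CA
context.load_cert_chain("our-gateway.pem", "our-gateway.key")  # prove our identity
# PROTOCOL_TLS_CLIENT already enables certificate and hostname checks.

with socket.create_connection((PEER_HOST, PEER_PORT)) as raw:
    with context.wrap_socket(raw, server_hostname=PEER_HOST) as tls:
        # Every byte exchanged with the peer now rides an authenticated,
        # encrypted channel instead of a link trusted by assumption.
        tls.sendall(b"...application payload...")
```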

Determining risk in cyberspace is difficult. Attacks occur every day.

Attackers may be highly skilled and well-resourced adversaries or simply opportunistic amateurs. Some are professional cyber criminals, others are motivated by politics or status within their community. Still others may be engaged in espionage or data theft and have ties to state governments. Attacks may be largely symbolic, intended to intimidate, or they may aim to cause disruption or destruction.

An attack that may seem insignificant may have much larger consequences.

Knowing the degree of risk posed by attackers — ascertaining the “who” and the “why” — is critical for mounting an effective response. To be clear, understanding why an attack occurred should not become an excuse (“why would anyone attack poor old me?”) to limit or restrict corrective measures. Rather, it situates the attack in a broader perspective that may indicate why the target was chosen and what the attackers may aim to do with the information they have extracted.

Understanding a single attack is only one component of establishing a complete threat picture.

In order to develop a better understanding of the rapidly changing risks and threats in cyberspace, ongoing monitoring and analysis is required. Rather than a static assessment, or a singular incident response, such threat mapping is better conceptualized as an iterative interrogation process in which old and new data are examined for meaningful relationships and new evidence.
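In code terms, the difference between a static assessment and an iterative interrogation is roughly the difference between a one-shot query and a loop that folds each new observation back into the accumulated picture. A toy sketch, with an indicator format invented for illustration:

```python
# Toy sketch of iterative threat-data correlation: each new batch of
# observations is checked against everything seen so far, so old data
# can acquire new meaning. Indicator fields are invented for illustration.

from collections import defaultdict

# Index of everything seen so far: indicator value -> set of incident ids
history = defaultdict(set)

def ingest(batch, history):
    """Fold a batch of (incident_id, indicator) pairs into history,
    reporting any indicator that now links two or more incidents."""
    links = []
    for incident, indicator in batch:
        history[indicator].add(incident)
        if len(history[indicator]) > 1:
            links.append((indicator, sorted(history[indicator])))
    return links

batch_1 = [("case-17", "203.0.113.5"), ("case-17", "mal.example.org")]
batch_2 = [("case-23", "mal.example.org")]  # same domain, months later

for batch in (batch_1, batch_2):
    for indicator, incidents in ingest(batch, history):
        print(f"{indicator} now links incidents: {', '.join(incidents)}")
```

Here an indicator from an old case quietly acquires new meaning when a later case touches it, which is precisely what a one-time assessment would miss.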

Cyber threats to CI exist, don’t get me wrong, but the emphasis need not be on an unlikely catastrophic event like a “cyber-Katrina,” a “cyber-9/11,” or a “digital Pearl Harbor.” There are numerous vulnerabilities to be exploited, and they probably are being exploited.

The good news is that they are often remedied through the implementation of best security practices. The bad news is that “boring” security concerns do not capture the imagination of policy makers and bureaucrats responsible for committing resources to fix security issues.

Still, this is not an excuse to conjure up fiction, even if the goal is to spur corrective action in the right direction.

We need to find the right framing that captures the attention of policy makers but that accurately reflects the threats and vulnerabilities to CI.

We need to change the perception of security as something that’s brought in to “fix” an emergency or as a response to catastrophe. It needs to be part of the development, implementation, and operation of CI. Considering the sorry state of affairs, I do think that scenarios can be useful tools to help policy makers understand the nature of the threat. However, they need to be realistic; they need to reflect the operational environment of CI. If they are just hype, they are at best useless and at worst an actual detriment.
