Posts tagged “Critical Infrastructure”

Thoughts on Critical Infrastructure Protection



In a very interesting paper for the International Risk Governance Council, Ortwin Renn describes a framework that provides an “analytic structure for investigating and supporting the treatment of risk issues.” Renn argues that risks are “mental ‘constructions’” in which actors link signals from the real world with “experience of actual harm” along with “the belief that human action can prevent harm in advance.” How actors come to understand threats and risks is therefore critical, for it shapes risk governance. Renn defines “risk governance” as follows:

On a national scale, governance describes structures and processes for collective decision-making involving governmental and non-governmental actors (Nye and Donahue 2000). Governing choices in modern societies is seen as an interplay between governmental institutions, economic forces and civil society actors (such as NGOs). At the global level, governance embodies a horizontally organised structure of functional self-regulation encompassing state and non-state actors bringing about collectively binding decisions without superior authority (cf. Rosenau 1992; Wolf 2002).

Renn suggests that there are four phases of risk governance: pre-assessment, appraisal, tolerability and acceptability judgement, and management. My interest at this point is in the pre-assessment phase. How do we come to understand the nature of cyber threats to critical infrastructure (CI), and how do we assess the risks they pose? Framing is a critical component of the pre-assessment phase, for it determines what information is relevant and how risk is perceived. It shapes how we come to understand the threat and indicates a direction for action.

Framing

Cyber threats, and in particular cyber threats to CI, have largely been framed at two polar extremes.

On one end of the continuum we have those who believe cyber threats to be nothing more than a nuisance. It is not uncommon to hear “our systems are well protected”, followed by “everyone gets a virus every now and then” and finally “well, our critical systems are firewalled, air-gapped, not connected to the Internet.” In this framing, threats are treated as “standalone” incidents that are largely technical in nature.

From such a perspective, technical considerations take precedence over who might be behind the attacks, what the attackers’ motivations are, and what the consequences of cyber intrusions are — for example, what could the attackers have done with the level of access obtained, the documents stolen, or the contacts harvested? This results in underestimated risk, de-prioritization, and sometimes even inaction.

At the other end of the continuum you’ll hear that China/Russia/Hackers – whatever the flavour of the day is – have thoroughly infiltrated CI and can “shut down the electrical grid” or “crash airplanes into one another” or … insert catastrophe here.

From this perspective, political persuasions concerning attribution are prioritized over technical considerations. The scenarios used to illustrate the threat are often ill-conceived and do not reflect the technical and operational environments they are meant to address. Rather than focus on “boring” details, this perspective seeks to lay blame on external sources. This results in overestimated risk, which sometimes breeds disbelief and therefore inaction, or, when it does spur action, often focuses it on the wrong threats.

For example, commentators often cite the case of the Australian sewage hacker. An attacker compromised a waste management system near Brisbane, Australia and intentionally caused millions of litres of raw sewage to spill out across the Sunshine Coast area. Many, including the WSJ, suggest, “what if hackers located in China or Russia were able to conduct attacks like this here?”

Well, it turns out that the “hacker” in this case was an insider: he was “employed by the company that had installed the system” that he later “hacked”, and he had specialized knowledge of the system. Moreover, the attack did not occur over the Internet; in fact, the attacker issued commands over radio and had to be within a 25-mile radius – something that doesn’t apply to hackers operating out of China or Russia. Security solutions focusing solely on external threats would not have protected this installation.

Re-framing the problem reveals a much more complex threat landscape. CI faces “integrated threats” which encompass the intersection of cyberspace and “meatspace”. The “cyber” threat is not purely digital. Nor is the threat limited to only emergencies or catastrophic events.

CI encompasses the private and public sectors and is increasingly reliant on Internet connectivity (in some capacity). Government relies on networks operated by private firms, which contract with other private firms and so on. “Ownership” of the cyber security process is distributed across all the entities responsible for the setup and installation of these systems, through to operations and maintenance.

In addition, individual operators are reliant on others – something that the Northeast Blackout of 2003 demonstrated so vividly. Canadian CI was negatively impacted by the failure of, in that case, a foreign operator.

This same lesson applies to cyberspace.

Therefore defence needs to be conceptualized not just in terms of firewalls and IDS but also in terms of the security of operations at each stage (computer security, software security, the tools used during installation, remote access for maintenance, and connections from “trusted” operators) as well as the “insider” threat.

The good thing is that this re-framing is occurring. There are now a variety of public documents that serve as an early warning of the potential threat CI faces – I’ll briefly discuss two of them:

1. A report by the Department of Transportation’s Inspector General that documents a variety of attacks against and vulnerabilities in the FAA’s air traffic control system (May 2009)

2. A report commissioned by the Department of Energy investigating common cyber security vulnerabilities in control systems (November 2008)

Early Warning

The DOT Inspector General’s report documented successful attacks that have affected FAA networks. In 2006 the FAA shut down a “portion of its ATC systems in Alaska” due to a “viral attack”, and in 2008 FAA computers, again in Alaska, were compromised and 40,000 usernames and passwords were stolen. In 2009 “an FAA public-facing Web application computer” was compromised, leading to the theft of “PII on 48,000 current and former FAA employees.”

Vulnerabilities were found during an audit in various web applications that would have allowed attackers to access the data stored on those computers – this included public-facing systems, such as those which list “communications frequencies for pilots and controllers”, as well as internal systems used by the FAA:

  • Unauthorized access was gained to information stored on Web application computers associated with the Traffic Flow Management Infrastructure System, Juneau Aviation Weather System, and the Albuquerque Air Traffic Control Tower;
  • Unauthorized access was gained to an ATC system used to monitor critical power supply at six en route centers; and
  • Vulnerability found on Web applications associated with the Traffic Flow Management Infrastructure system was confirmed, which could allow attackers to install malicious codes on FAA users’ computers.

According to the report, “[t]his occurred because (1) Web applications were not adequately configured to prevent unauthorized access and (2) Web application software with known vulnerabilities was not corrected in a timely manner by installing readily available security software patches released to the public by software vendors.”

The report on common cyber security vulnerabilities in control systems for the DOE identified similar problems, along with serious concerns about the use of plain-text communications protocols and the lack of security surrounding remote access systems. The report found:

“If compromised, an attacker could utilize these systems to cause catastrophic damage or outages directly or by exploiting paths to critical end devices or connected SCADA systems.”

Typically the network environment is divided into a “business LAN” and a “control system LAN” with a firewall in between. Sometimes a DMZ is created to share data between the corporate and control system LANs (a toy version of this zoning policy is sketched after the findings below).

The report found that:

  • Firewall and router filtering deficiencies allowed access to control system components from external and internal networks: unrestricted telnet access was allowed to DMZ network equipment, VPNs were misconfigured, and remote desktop passwords were shared between security zones (corporate and control system networks);
  • It was possible to escalate privileges from a non-control system application remotely published by the display application to a control system application.
  • A malicious user who has physical access to an unsecured port on a network switch could plug into the network behind the firewall to defeat its incoming filtering protection.
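
To make that zoning model concrete, here is a minimal, hypothetical sketch in Python. The zone names, services, and rules are invented for illustration; real deployments express this in firewall rule sets, not application code:

    # Minimal sketch of default-deny zoning between a corporate LAN, a DMZ,
    # and a control system LAN. Zone names, services, and rules are invented
    # for illustration.
    ALLOWED = {
        # (source zone, destination zone): services explicitly permitted
        ("corporate", "dmz"): {"https"},      # corporate pulls reports from the DMZ
        ("control", "dmz"): {"historian"},    # control side pushes process data out
        # Note: no rule permits direct corporate -> control traffic at all.
    }

    def is_allowed(src: str, dst: str, service: str) -> bool:
        """Default-deny: traffic passes only if an explicit rule permits it."""
        return service in ALLOWED.get((src, dst), set())

    # The deficiencies in the findings above map onto missing checks like these:
    assert not is_allowed("corporate", "control", "rdp")   # no shared remote desktop
    assert not is_allowed("corporate", "dmz", "telnet")    # no unrestricted telnet
    assert is_allowed("control", "dmz", "historian")

The point is not the code but the policy: anything not explicitly permitted between zones is dropped, which is precisely what the audited installations failed to enforce.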

A very interesting theme throughout the report was the focus on remote, trusted endpoints. The report found that the Inter-Control Center Communications Protocol (ICCP), “an open protocol used internationally in the electric power industry to exchange data among utilities, regional transmission operators, independent power producers, and others” uses plain text and that such connections should be treated as “untrustworthy” and placed in a separate DMZ.

In other words, operators within the industry treat remote connections between them as trustworthy, bypassing the security procedures in place. This means that even if your operation is relatively secure, an attacker may be able to bypass it by compromising a less secure peer.
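
Since ICCP itself is plain text, one mitigation is to tunnel peer links over mutually authenticated TLS rather than trusting them by network location (as I understand it, the IEC 62351 security standards take broadly this approach for this protocol family). A minimal sketch using Python’s standard ssl module; the host, port, and certificate paths are placeholder assumptions:

    import socket
    import ssl

    # Sketch: treat a peer utility's link as untrusted and require mutual TLS.
    # The host, port, and certificate paths below are placeholders.
    PEER_HOST = "iccp.peer-utility.example"
    PEER_PORT = 9102

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    context.load_verify_locations(cafile="peer-ca.pem")      # trust only the peer's CA
    context.load_cert_chain(certfile="our-node.pem", keyfile="our-node.key")

    with socket.create_connection((PEER_HOST, PEER_PORT)) as raw:
        # The peer must present a certificate matching PEER_HOST, and we present
        # ours in turn: neither side is trusted by network location alone.
        with context.wrap_socket(raw, server_hostname=PEER_HOST) as tls:
            tls.sendall(b"application payload goes here")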

Determining risk in cyberspace is difficult. Attacks occur every day.

Attackers may be highly skilled and well-resourced adversaries or simply opportunistic amateurs. Some are professional cyber criminals, others are motivated by politics or status within their community. Still others may be engaged in espionage or data theft and have ties to state governments. Attacks may be largely symbolic, intended to intimidate, or they may aim to cause disruption or destruction.

An attack that may seem insignificant may have much larger consequences.

Knowing the degree of risk posed by attackers — ascertaining the “who” and “why” — is critical for mounting an effective response. To be clear, understanding why an attack occurred should not be used as an excuse (“why would anyone attack poor old me?”) to limit or restrict corrective measures. Rather, it is used to situate the attack in a broader perspective, which may indicate why the target was chosen and what the attackers may aim to do with the information they have extracted.

Understanding a single attack is only one component of establishing a complete threat picture.

In order to develop a better understanding of the rapidly changing risks and threats in cyberspace, ongoing monitoring and analysis is required. Rather than a static assessment, or a singular incident response, such threat mapping is better conceptualized as an iterative interrogation process in which old and new data are examined for meaningful relationships and new evidence.
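
As a toy illustration of that iterative interrogation, suppose each incident is reduced to a set of observed indicators and every new incident is checked against everything seen before. All incident names and indicators in this sketch are invented:

    # Toy sketch of iterative threat mapping: each incident is reduced to a set
    # of observed indicators, and each new incident is compared against the old
    # ones for overlap. All incident names and indicators are invented.
    incidents = {
        "intrusion-2008-11": {"203.0.113.7", "update-srv.example.net"},
        "phish-2009-02": {"198.51.100.24", "update-srv.example.net"},
    }

    def ingest(name: str, indicators: set) -> None:
        """Add a new incident, reporting overlaps with everything seen so far."""
        for old_name, old_indicators in incidents.items():
            shared = old_indicators & indicators
            if shared:
                print(f"{name} shares {sorted(shared)} with {old_name}")
        incidents[name] = indicators

    ingest("scada-probe-2009-06", {"203.0.113.7", "mail-relay.example.org"})
    # -> scada-probe-2009-06 shares ['203.0.113.7'] with intrusion-2008-11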

Cyber threats to CI exist, don’t get me wrong, but the emphasis need not be on an unlikely catastrophic event like a “cyber-Katrina”, a “cyber-9/11”, or a “digital Pearl Harbor.” There are numerous vulnerabilities to be exploited, and they probably are being exploited.

The good news is that they are often remedied through the implementation of best security practices. The bad news is that “boring” security concerns do not capture the imagination of policy makers and bureaucrats responsible for committing resources to fix security issues.

Still, this is not an excuse to conjure up fiction, even if the goal is to spur corrective action in the right direction.

We need to find the right framing that captures the attention of policy makers but that accurately reflects the threats and vulnerabilities to CI.

We need to change the perception of security as something that’s brought in to “fix” an emergency or as a response to catastrophe. It needs to be part of the development, implementation, and operation of CI. Considering the sorry state of affairs, I do think that scenarios can be useful tools to help policy makers understand the nature of the threat. However, they need to be realistic – they need to reflect the operational environment of CI. If they are just hype, they are at best useless and at worst an active detriment.

When Hype is the Threat Part 2



Recently, Jim Harper, Director of Information Policy Studies at the Cato Institute, stated that “both cyber terrorism and cyber warfare are concepts that are gross exaggerations of what’s possible through Internet attacks,” and it rubbed some the wrong way. But the overall point he was making is lost when focusing on this quote alone. He also said:

the real problems are those worms, those scripts, those denial of service attacks … they are serious and we have to take care of them, but there isn’t a strategic advantage to be gained by cyberwarfare… we can be inconvenienced, it can be costly, so we do have to secure ourselves, but we are not going to be at cyberwar and we are not going to suffer cyberterrorism.

He’s not suggesting that there aren’t real threats, just that the conceptual vehicles of cyberterrorism and cyberwar might not be that helpful. I’d suggest they are somewhat akin to the simple “War on X” metaphors, such as the “War on Drugs” and the “GWOT”, that are used to invoke responses such as fear and to suggest courses of action that would otherwise be reserved for a state of emergency such as, well, war. Even the GWOT has now been replaced with the “Overseas Contingency Operation.”

But there is more. In “The War Metaphor in Public Policy”, James Childress states:

We have to ask of each use of war as a metaphor: Does it generate insights or does it obscure what is going on and what should be done?

The metaphor of cyberwar invites us to find cases of intrusions and disruptions and layer on political context and significance, which usually takes the form of “what if the (russians|chinese|terrorists) did it?” I believe that this leads us to inaccurately assess the nature of the threats we aim to counter. In a classic 1997 case, a teenager in the U.S. disabled “vital services” to a Worcester, MA air traffic control tower for six hours. Telephone service was disrupted, as was a “circuit which enables aircraft to send an electric signal to activate the runway lights on approach.”

The “TRADOC G2 Handbook No. 1.02 – Cyber Operations and Cyber Terrorism” uses this example in an attempt to illustrate the potential of cyber-terrorism. But left out of the document and its analysis is the reason this attack succeeded and how it could have been defended against.

How did this happen?

[T]he loop carrier systems operated by the telephone company were accessible from a personal computer’s modem. This accessibility was maintained so that telephone company technicians could change and repair the service provided to customers by these loop carrier systems quickly and efficiently from remote computers.

Bell Atlantic left access to a “critical” system wide open. Instead of being reprimanded, they were congratulated:

Our critical infrastructure is safer because of Bell Atlantic’s intolerance of the intrusions it discovered into its network.

Focusing on “what if it was cyberterrorism” often leads us to ignore the source of the vulnerability in the first place. Attention is placed on the hypothetical rather than the real threat. And the recommended responses become disproportionate to the threat.

This, of course, doesn’t mean that there are not serious security concerns with the FAA and air traffic control. A recent presentation at Defcon explored the vulnerabilities of the air traffic control system and even the U.S. government has acknowledged such issues. A report by the Department of Transportation’s Inspector General documents a variety of attacks against and vulnerabilities in the FAA’s air traffic control system.

Vulnerabilities were found during an audit by KPMG in various web applications that would have allowed attackers to access the data stored on those computers and, as a result, meant that “internal FAA users (emphasis added) (employees, contractors, industry partners, etc.) could gain unauthorized access to ATC systems.” Successful attacks have also taken place. In 2006 the FAA shut down a “portion of its ATC systems in Alaska” due to a “viral attack”, and in 2008 FAA computers, again in Alaska, were compromised and 40,000 usernames and passwords were stolen. In 2009 “an FAA public-facing Web application computer” was compromised, leading to the theft of “PII on 48,000 current and former FAA employees.”

So how did the Washington Post report this?

Tom Kellermann, a vice president at Core Security Technologies, a cybersecurity company, likened the threats cited by the report to the television show “24”, in which terrorists hack into and commandeer the FAA’s air-traffic control system to crash planes. “The integrity of the data on which ground control is relying can be manipulated, much as seen in ‘24’,” he said.

But what the report actually found was:

(1) Web applications were not adequately configured to prevent unauthorized access and (2) Web application software with known vulnerabilities was not corrected in a timely manner by installing readily available security software patches released to the public by software vendors.

Basic security.

War, Childress argues, is “exceptional activity that can be justified only under exceptional circumstances and, even then, should be fought within appropriate moral limits.” He suggests that when we use the imagery of war to illuminate policy debates, “we often forget the moral reality of war”:

Among other lapses, we forget important moral limits in real war—both limited objectives and limited means. In short, we forget the just-war tradition, with its moral conditions for resorting to and waging war. We are tempted by seedy realism, with its doctrine that might makes right, or we are tempted by an equally dangerous mentality of crusade or holy war, with its doctrine that right makes might of any kind acceptable. In either case, we neglect such constraints as right intention, discrimination, and proportionality, which protect the humanity of all parties in war.

Instead of focusing on securing networks (boring), the emphasis moves to counter-attack (sexy).

In response to the recent DDoS attacks aimed at several South Korean and U.S. government websites, Rep. Peter Hoekstra (R-Mich.) suggested that there should be retaliation against North Korea, even though most experts believe that there is no connection between North Korea and the attacks. This line of thought is actually fairly well developed. In “Carpet bombing in cyberspace”, Col. Charles Williamson argued that:

America needs a network that can project power by building an af.mil robot network (botnet) that can direct such massive amounts of traffic to target computers that they can no longer communicate and become no more useful to our adversaries than hunks of metal and plastic.

Luckily, this is generally seen as a bad idea. A recent NY Times article investigates some of the restraints on the use of cyber attacks due to the collateral damage they produce, as well as their unintended consequences.

When Hype is the Threat



Articles like this are very irritating. They are short on detail and long on hype. And when that hype focuses on the wrong threat, it becomes the threat itself.

This WSJ article is a typical case. These stories are not new, and they pop up from time to time, usually focused on Russian or Chinese hackers — and in this case some unholy alliance of both. (I’m surprised that Al Qaeda wasn’t thrown into this “Haxis of Evil” :)) Some have suggested that the article was planted for political purposes but, regardless, the hype seems to focus on the wrong threat.

Since there are no details in the article, the author attempts to use an example to hype the threat: the infamous Australian sewage case. However, this example just proves the overall uselessness of the entire article.

First, the “sewage hacker” case was an inside job. The attacker was “employed by the company that had installed the system” that he later “hacked”. Second, he had specialized knowledge of the system (related to his “insider” status):

After a brief police pursuit from the Sunshine Coast towards Brisbane, Boden was run off the road. In his car was the specialized proprietary SCADA equipment he had used to attack the system, and a laptop; however, it was a piece of $18 cable that ultimately led to his downfall.

Grounds for charges were slim, but the handmade cable showed he had the technical capability to hack the Scada system.

The laptop found in his car contained enough messages to prove he sent commands to disrupt various pump stations and that, combined with proprietary radio equipment and specialized cable, was enough to find him guilty of what has been dubbed the first case of critical infrastructure hacking in Australia.

Third, the attack did not occur over the Internet.

“We worked out he had to be within a 25-mile radius, but one night we had not seen any evidence of hacking until he came on about 6.30 a.m. We had private investigators put cars along all the bridges and overpasses from the Sunshine Coast to Brisbane, because we knew the description of his car and knew he would be driving past. The investigators waited until they saw him on the highway and contacted police to intercept the car.

“When police went to intercept him, he did a runner; the police then ran him off the road and found a car full of proprietary gear. No one had seen him hack our systems, but from his laptop we were able to find the last recorded event and messages sent which exactly matched our SCADA radio monitoring systems.”

So, following the logic in the WSJ article, Chinese and/or Russian hackers would have to drive (can you do that over the Internet?) to within 25 miles of their targets — after having previously been employed by them — in order to conduct their attacks.

Now, the point here is not to diminish the threat of attacks against critical infrastructure but to point out that the hype-based approach ends up focusing attention on the wrong kinds of threats. By focusing on external, Internet-based threats (that may or may not really exist), the focus on the insider threat is lost.

In many cases the insider threat is of more importance than an external, Internet-based threat (especially when such systems are *not* connected to the Internet). A recent case concerning an oil platform is yet another example:

A Los Angeles federal grand jury indicted a disgruntled tech employee Tuesday on allegations of temporarily disabling a computer system detecting pipeline leaks for three oil derricks off the Southern California coast.

In an old Gartner exercise, a team was given $200 million, access to state-level intelligence, and five years to plan attacks. Even though this study is old, I like it because the scenario gives the attackers significant resources as opposed to many that simply rely on “hackers” from X or Y countries. They also divided the team into various groups focusing on different parts of critical infrastructure.

The telecommunications disruption team suggested that the requirements for a successful attack would include working knowledge of telecommunications systems, PhD-level education, specific product knowledge of the targets, and insider assistance. They suggested that it would have large resource requirements and be fairly expensive. As can be seen in an overview by The Register, bribes and insiders play an important role:

With that said, it’s nevertheless clear that a fair amount of mischief can be brought about by a large, well-funded technical dream-team. Telecomms group member Fraley reported that it’s possible to cause SS-7 (Common Channel Signaling System #7) and PSTN (Public Switched Telephone Network) capacity to collapse for a brief period. However, it would take a very large investment in both personnel and money (bribes, presumably) to accomplish even that much. Perhaps 200 people would be needed, he reckoned. A satchel bomb thrown down a manhole in Manhattan would be far easier, far cheaper, and still fairly destructive, he remarked.

In fact, there was a case just recently in which attackers “killed landlines, cell phones and Internet service for tens of thousands of people” and “froze operations in parts of the three counties at hospitals, stores, banks and police and fire departments that rely on 911 calls, computerized medical records, ATMs and credit and debit cards.” How? By cutting fiber optic cables (which would be hard to do over the Internet from Russia or China).

Insiders are also required to exploit SCADA systems:

As for the power grid, it’s national, and controlled by large, complex SCADA (Supervisory Control and Data Acquisition) systems. Still, it’s only feasible to target a large metropolitan area, team member John Dubiel noted. Attacking the entire grid would be quite impractical. The best approach would be physical attacks on major transmission corridors, all of which are well-known, followed by the malicious use of owned control systems to create a pattern of cascading failures throughout the target region. “At this point the system is attacking itself,” he observed. Finally, one would attack and damage the SCADA systems themselves to hamper recovery efforts.

It’s possible to launch remote attacks against some SCADA systems connected to public infrastructure, but insiders would have to be recruited to attack others, he added.

In many cases the focus on protecting critical infrastructure needs to be placed on the physical infrastructure, the “insider threat”, and very often on *basic* Internet security practices (such as changing default passwords). When the emphasis shifts away from such threats to focus on hype and hazy allegations that may or may not be politically motivated, the hype itself becomes the threat. Rather than deal with emerging security problems, the emphasis is placed on building a “cyber-Maginot Line” without an accurate articulation of the nature of the threat.
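
To underline just how basic that practice is, here is a hypothetical sketch of an inventory audit that flags devices still running vendor defaults (all vendor names, credentials, and devices below are made up):

    # Hypothetical inventory audit that flags devices still running vendor
    # default passwords. Vendors, defaults, and devices are all invented, and
    # a real audit would test logins rather than store passwords in plain text.
    KNOWN_DEFAULTS = {
        "AcmeRTU": {"admin", "acme123"},
        "GridCoPLC": {"password", "1234"},
    }

    inventory = [
        {"device": "pump-station-3", "vendor": "AcmeRTU", "password": "acme123"},
        {"device": "feeder-relay-7", "vendor": "GridCoPLC", "password": "uGz7#qT1rW"},
    ]

    for entry in inventory:
        if entry["password"] in KNOWN_DEFAULTS.get(entry["vendor"], set()):
            print(f"{entry['device']}: still using a vendor default password")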

New Cyber Security Task Force in Canada?



Michael Geist reports that Canada may be creating a Cyber Security Task Force. Although Public Safety and Emergency Preparedness Canada has not announced it, the Government Electronic Directory Service now lists a position for a Cyber Security Task Force Secretariat. Geist raises some key issues, including:

First, who will be on the task force? It is essential it include representation from privacy and civil liberties groups. Security is critical but must be imbued with full respect for the privacy and civil liberty rights of all Canadians. Revelations of widespread telephone communications surveillance in the United States — frequently with the secret participation of telecommunications firms — has provided evidence of the danger of focusing on security without counterbalancing with a privacy and civil liberties perspective.

Second, what other legislation could be introduced in such an environment? With a cyber-security task force on the way, speculation will increase that the government is also preparing to bring back so-called “lawful access” legislation. Introduced by the Liberal government, the innocuous-sounding Modernization of Investigative Techniques Act envisioned a host of new legal powers associated with near-ubiquitous surveillance technologies.

As Geist notes, the proposed legislation concerning electronic surveillance requires ISPs to “install new systems capable of capturing data and identifying specific subscriber activities” and lacks judicial oversight — it allows various law enforcement authorities to simply request subscriber data from ISPs without a warrant.