10 Years Since GhostNet



On March 28, 2009, the Citizen Lab released “Tracking GhostNet”. So much has changed since then, both for me personally and for the research community, the industry, and the threat landscape itself.

It has been a long time since I updated this blog; in fact, the last entry was at the end of 2010. The “writing” page has largely been kept up to date with the major papers I’ve contributed to, and I continued blogging publicly from 2011 to 2013 at Trend Micro and at FireEye since then. I’m not entirely sure why I stopped blogging here, but after seeing Ron Deibert and some of my old Citizen Lab colleagues the other day, and realizing that it has literally been 10 years since GhostNet, I’m feeling a bit inspired.

Ron Deibert covered it in Black Code, but I remember crunching through the pcaps Greg Walton collected from the Dalai Lama’s office and other locations. We spotted all the Enfal activity quickly, and eventually we found the beacons for the malware (we probably should have named it :)) that led to “GhostNet”.

After a little bit of the infamous Google searching…

… all you had to do was visit “/Serverlist.php” on any of the C2 servers (which were obtained from analyzing additional malware samples) and you could see the control panel.
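For anyone curious what that kind of check looks like in practice, here is a minimal sketch of probing a list of suspected C2 hosts for an exposed “/Serverlist.php” page. The host names are placeholders, not the actual GhostNet infrastructure, and this is just an illustration, not the tooling we used at the time.

```python
# Hypothetical sketch: check suspected C2 hosts for an exposed "/Serverlist.php" panel.
# Host names are placeholders for illustration only.
import requests

suspected_c2_hosts = ["example-c2-host-1.test", "example-c2-host-2.test"]

for host in suspected_c2_hosts:
    url = f"http://{host}/Serverlist.php"
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException as exc:
        print(f"{host}: unreachable ({exc})")
        continue
    # A 200 response with HTML content suggests the control panel is exposed.
    if resp.status_code == 200 and "html" in resp.headers.get("Content-Type", ""):
        print(f"{host}: possible exposed panel ({len(resp.text)} bytes)")
    else:
        print(f"{host}: HTTP {resp.status_code}")
```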

Soon after, Google revealed (in 2010) that it had been compromised in what became known as Operation Aurora, and “APT” and the “Cyber Kill Chain” became mainstream. There was increasing focus on cyberespionage groups, and on Comment Crew in particular, with the notable releases of McAfee’s Shady RAT report (2011) and eventually Mandiant’s blockbuster APT1 report (2013).

Producing public technical papers detailing cyber-espionage activity became a fairly regular occurrence. I documented a lot of the research that influenced me during that time frame in these posts:

Looking Back

Looking back, I think there are some things we got right with GhostNet, but some that definitely could have been done better.

My biggest regret is that we should have been crystal clear from the outset that there was no “hack back” or anything like that. I spent the next few years trying to clarify what had happened.

I think we did a good job of referencing prior work, in particular the work of Maarten Van Horenbeeck (which had a big impact on me, thanks for the heads-up Oxblood!) and Mikko Hyppönen and the folks at F-Secure.

There were two analyses of the GhostNet malware that I included in the footnotes of the report but that had to be redacted because the command and control servers were still up (and cached in Google), allowing anyone to grab all the victim data:

I regret not reaching out to them, as well as others, and working in a more collaborative way with the broader targeted-threats research community. I think this would have really helped in other areas where we could have done better:

  • My malware analysis skills were pretty rudimentary at that point (in fact I would still say that I’m not that good and I’m learning from the amazing people I work with all the time).
  • I should have better understood and explained that there were multiple, separate attackers on the same box. Not doing so caused a lot of confusion between what was GhostNet and what were clusters of Enfal activity.
  • We could have handled victim notification better. I think being connected to the research community would have really helped. And we did learn from this, it was great to work with Shadowserver and Steven Adair on the next report.

One of the areas that I think we focused on, but that didn’t always get the attention it deserved, was the importance of field work. This was our version of incident response engagements. Gaining an understanding, even if rudimentary, of the context of a particular incident, what the attackers did post-compromise, why certain data was stolen, and which specific victims were targeted or compromised is extremely important. Greg Walton’s role here cannot be overstated.

Finally, I think we handled attribution in a responsible way. We assessed the data that we had and explored alternative scenarios. We discussed freelancing, third-party actors, tacit state-encouragement and the possibility of false flags. We expressed an element of confidence in our suggestion that the “evidence tilts the strongest” toward Chinese state involvement.

Looking back I think the report withstands the test of time.

Looking Forward

Over the years I think there has been a certain level of APT fatigue. The research community broadened, and we all began looking at the same things and rushing to publish first (myself included). There seemed to be a backlash against these reports, ranging from “it’s all a bunch of marketing” to “it’s always China”.

Then there was the use of the APT label to deflect responsibility when compromises occurred. Simultaneously, the distinction between the all-powerful APT and the lowly “commodity” malware emerged. I’ve never liked this distinction. Gh0st, PoisonIvy, and many other publicly available malware families and utilities have been used by both cyberespionage and cybercrime actors of varied skillfulness. The same is true in the modern era with the use of Red Team frameworks (Metasploit, Cobalt Strike, PowerShell Empire) as well as a wide variety of RATs. Dismissing whole swaths of activity is probably not the best security posture.

I’ve only been sporadically researching cyberespionage since about 2016, having largely focused on cybercrime. But I have been following the work of a lot of solid researchers, both new and old school, who continue to produce amazing research year after year.

To me, and correct me if I’m wrong, it seems like it’s even harder these days. These are not entirely new developments, but dealing with deliberate attempts by threat actors to mislead on attribution, and sorting through the “badtribution” out there, presents challenges. In addition, I think we’ll see more throwaway operations in which the things we’re used to clustering on, like command and control servers, won’t be re-used, thus reducing the hard overlaps available. And the use of large-scale distribution that obscures the targeted nature of post-compromise activity, especially when traditional cybercrime activity overlaps with what seems to be more targeted activity, can further complicate the ability to track and assess the motivations and capabilities of these actors.

Well, I’ll leave it at that, and hopefully I won’t wait years to post again :)
