No matter how careful they are, ‘super hackers’ leave traces of their activities. Windows event logs, DNS logs and DHCP logs – sources not normally analysed from an attack detection perspective – all hold clues to the clandestine presence of these invaders.

This is a view confirmed by Anton Chuvakin, Vice President at Gartner, who, in a recent talk at the Gartner Security Management conference in Sydney, said: “Super hackers practically do not exist. They always leave trace.”

By applying context, even a ‘super hack’ can be dissected to reveal otherwise hidden clues.

Real-world hunting – Duqu 2

Let’s apply some context and look at a ‘super hack’: the Duqu 2 attack on Kaspersky, an excellent example of advanced malware deployed by a nation state. While this was no doubt a very sophisticated piece of malware, deployed in a well-resourced and well-executed operation, there were components of the attack that would have enabled early detection.

According to the Kaspersky report, the attackers installed an MSI package from a temporary folder, which in turn installed a service; they were also observed using the Task Scheduler to initiate the install. This left several artefacts, all of which are detectable with the right technologies.

In particular, service creation on an endpoint is recorded in the Windows event logs (event ID 4697), as is the creation of a scheduled task (event ID 4698), and both are infrequent events on most endpoints. There is also other forensic evidence, such as traces of the MSI installer in Prefetch data and in lists of named pipes.
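
To make this concrete, here is a minimal sketch of how those two event IDs could be pulled from the Security log using Python and the built-in wevtutil command-line tool. It assumes a Windows host and an elevated prompt, and that auditing of these events is enabled; the function name and the cap on returned events are illustrative, not from the Kaspersky report.

```python
# Minimal sketch: query the Windows Security log for service-creation
# (4697) and scheduled-task-creation (4698) events via wevtutil.
import subprocess

# XPath filter selecting only the two event IDs of interest.
QUERY = "*[System[(EventID=4697 or EventID=4698)]]"

def recent_service_and_task_events(max_events: int = 50) -> str:
    """Return the newest matching events from the Security log as text."""
    result = subprocess.run(
        [
            "wevtutil", "qe", "Security",
            f"/q:{QUERY}",       # XPath filter on the two event IDs
            "/f:text",           # human-readable output
            f"/c:{max_events}",  # cap the number of events returned
            "/rd:true",          # newest events first
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(recent_service_and_task_events())
```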

Additionally, endpoint threat detection and response software with advanced live-memory analysis capabilities could detect other aspects of the malware, such as its use of process hollowing and thread injection/memory manipulation.

The window for detection is small, but that does not mean the malware cannot be detected given the right approach to data collection and analysis; namely, the approach outlined by Chuvakin in the talk mentioned above.

Data deluge

The point Chuvakin made was that these log sources, while frequently running to tens of millions of events daily (perhaps more in a larger organisation), can be invaluable in tracing the steps of an attacker during incident response, and that companies should embrace the ‘data deluge’ in order to meet the demands of responding to targeted, advanced attacks. Indeed, Chuvakin says that companies “should deploy more visibility tools; it’s likely you don’t have enough, even if you think you are drowning in data.”

This advice, while wholly sound, is unattainable for all but the largest and best-funded security teams. Many simply don’t have the resources to analyse and correlate data in their SIEMs to such a forensic level, and recognise that automation is only as good as the rulesets that drive it. In addition, recruiting and retaining staff with a deep understanding of the traces left by a ‘super hacker’, who may be nation-state backed, is incredibly difficult regardless of budget.

Indeed, successful attack detection takes these principles several steps further, to include:

  • Real-time analysis of the data deluge, 24/7/365:

Having security analysts who truly understand the hacker mindset, and who can track and unpick advanced threats, keep eyes on the so-called “data deluge” is crucial to reducing the current “detection deficit” between time of compromise and time of detection (highlighted as a key trend in this year’s Verizon DBIR).

  • A reliance on anomaly-based detection, rather than on signatures or threat intelligence:

Traditional defences rely heavily on signatures, whether anti-virus or IDS (Intrusion Detection Systems). While there is still a place for these to flag the obvious, so-called “super hackers” use custom attack tools, which are almost impossible to catch with signatures. In-depth log analysis and the tracking of higher-level indicators such as TTPs (Tactics, Techniques and Procedures), based on deviations from known-good baselines, is far more effective at unearthing advanced threats (a minimal sketch of this baseline approach appears after this list).

  • The correlation of standard event logs with network traffic and endpoint threat detection forensics:

Correlation is key to reducing time to detection, and an analyst needs to be able to pivot across data from all sources, adding context and enrichment as they go. An event from the network needs to be quickly verified and explored using host- and log-based data to gain as much intelligence as possible about a potential intrusion; the second sketch below shows this kind of cross-source pivot.
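
First, a minimal sketch of the baseline-deviation idea described above, assuming logs have already been parsed into (host, event ID) pairs. The threshold, field layout and sample data are illustrative, not taken from any specific product.

```python
# Minimal sketch of baseline-deviation detection: learn how often each
# event ID occurs per host in known-good data, then flag events whose
# ID is unseen or rare for that host.
from collections import Counter, defaultdict

def build_baseline(history):
    """Count how often each event ID was seen per host in 'known good' data."""
    baseline = defaultdict(Counter)
    for host, event_id in history:
        baseline[host][event_id] += 1
    return baseline

def flag_anomalies(baseline, new_events, rare_threshold=2):
    """Yield events whose ID falls below the rarity threshold for that host."""
    for host, event_id in new_events:
        if baseline[host][event_id] < rare_threshold:
            yield host, event_id

if __name__ == "__main__":
    # Known-good history: routine logon (4624) and logoff (4634) events.
    history = [("ws01", 4624)] * 500 + [("ws01", 4634)] * 480
    # New events: a routine logon, plus a service installation (4697).
    new = [("ws01", 4624), ("ws01", 4697)]
    for host, event_id in flag_anomalies(build_baseline(history), new):
        print(f"anomaly on {host}: event {event_id} deviates from baseline")
```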
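
Second, a sketch of pivoting across sources: attributing a suspicious DNS query to a host by matching its source IP against the DHCP lease that was active at query time. The record layouts, hostnames, domain and timestamps are invented for illustration.

```python
# Minimal sketch of cross-source pivoting: join DNS query records to
# DHCP lease records so an IP seen in DNS logs can be attributed to a
# specific host at a specific time.
from datetime import datetime

dhcp_leases = [  # (ip, hostname, lease_start, lease_end)
    ("10.0.0.23", "FINANCE-PC-07",
     datetime(2015, 6, 1, 8, 0), datetime(2015, 6, 1, 20, 0)),
]

suspicious_dns = [  # (source_ip, queried_domain, timestamp)
    ("10.0.0.23", "updates.example-c2.net", datetime(2015, 6, 1, 9, 30)),
]

def attribute_queries(dns_events, leases):
    """Pivot from DNS source IPs to hostnames via the active DHCP lease."""
    for src_ip, domain, ts in dns_events:
        for ip, host, start, end in leases:
            if ip == src_ip and start <= ts <= end:
                yield host, domain, ts

for host, domain, ts in attribute_queries(suspicious_dns, dhcp_leases):
    print(f"{ts}: {host} queried {domain}")
```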

This approach enables a business using a managed attack detection and response service to detect nation-state and advanced criminal ‘super hackers’ before they gain a significant foothold on its networks, and to respond appropriately before an advanced attack develops into a significant breach.
