
Identifying compromises with graphs of high-precision security metadata

Using directed graphs of lower-level security events to reliably discover complex security incidents.


Security alerts are frequently oriented around a single entity or device, identifying the specific suspicious activity observed and its source. However, major compromises are rarely limited to a single entity, and instead spread through the network, exhibiting a wide variety of techniques and possible indicators of compromise.

Piecing together this activity to identify its scope and any additional properties, such as a possible patient zero or any C2 endpoints, is typically the work of a human analyst. Various tools can represent specific events — such as network connections, SaaS actions, or DNS queries — in a graph format, which can help with this analysis, but it remains a difficult task to analyze these low-level events and determine which are likely to be associated with the compromise.
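
As a rough illustration of that low-level representation (using the networkx library and invented entity names, not any specific tool's data model), each observed event can be recorded as an edge in a directed graph:

import networkx as nx

# Illustrative low-level events: (source entity, destination entity, type).
# In practice there would be thousands of these, most of them benign.
low_level_events = [
    ("device-a", "10.0.0.5", "connection"),
    ("device-a", "rare-domain.example", "dns_query"),
    ("device-a", "files.saas.example", "saas_download"),
    ("device-b", "10.0.0.5", "connection"),
]

# One edge per event; parallel edges are kept so nothing is lost.
graph = nx.MultiDiGraph()
for source, destination, kind in low_level_events:
    graph.add_edge(source, destination, kind=kind)

# The difficult part remains: deciding which of these edges, if any,
# actually belong to a compromise.
print(graph.number_of_edges(), "low-level edges to triage")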

However, systems such as AI Analyst produce more precise, higher-level alerts for security events. For instance, rather than identifying that a device connected to three suspicious endpoints 27 times and that data was transferred, AI Analyst might produce a single exfiltration event highlighting the source device and all destinations.
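
As a sketch only, such a higher-level insight might be carried in a structure like the one below; the field names are assumptions made for illustration, not AI Analyst's actual schema:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class HighLevelEvent:
    kind: str                # e.g. "exfiltration" or "c2_beaconing"
    source: str              # the device the activity originated from
    destinations: list[str]  # every endpoint involved in the activity
    start: datetime          # when the activity began

# The 27 connections to three suspicious endpoints collapse into one event.
exfiltration = HighLevelEvent(
    kind="exfiltration",
    source="device-a",
    destinations=["rare-domain.example", "203.0.113.7", "198.51.100.2"],
    start=datetime(2024, 3, 1, 14, 2),
)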

If these higher-level insights are represented as a directed graph, the result is far more succinct: each edge summarizes many low-level events and can be treated with much higher confidence. Possible compromises can then be identified through analysis of this graph, with key properties such as the patient zero and any C2 domains being straightforward to read from the subgraph associated with each compromise.
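
A minimal sketch of that analysis, building on the illustrative structures above and again using networkx, might look like the following; the patient-zero and C2 heuristics shown here are simplifying assumptions rather than the production logic:

import networkx as nx

def build_graph(events):
    # One edge per (source device, destination) pair, annotated with the
    # event type and start time; far fewer edges than the raw event graph.
    g = nx.DiGraph()
    for event in events:
        for destination in event.destinations:
            g.add_edge(event.source, destination,
                       kind=event.kind, start=event.start)
    return g

def candidate_compromises(g):
    # Each weakly connected subgraph is a candidate incident; its node set
    # gives the scope of the compromise.
    for nodes in nx.weakly_connected_components(g):
        sub = g.subgraph(nodes)
        sources = {u for u, _ in sub.edges()}
        yield {
            "scope": sorted(nodes),
            # Assumed heuristic: patient zero is the source device whose
            # earliest activity within this subgraph starts first.
            "patient_zero": min(
                sources,
                key=lambda u: min(data["start"]
                                  for _, _, data in sub.out_edges(u, data=True)),
            ),
            # Assumed heuristic: C2 endpoints are the destinations of edges
            # produced by C2-type events.
            "c2_endpoints": sorted(
                v for _, v, data in sub.edges(data=True)
                if data["kind"] == "c2_beaconing"
            ),
        }

for incident in candidate_compromises(build_graph([exfiltration])):
    print(incident)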

This allows compromises to be detected, and their full scope determined, automatically, without requiring a human analyst's attention.

AI Research Centre

Backed by research.

In existence since Darktrace’s inception in 2013, the Darktrace AI Research Centre is foundational to our continued innovation. Rather than following a defined product roadmap, the Centre explores how AI can be applied to real-world challenges, finding solutions that cannot be achieved by humans alone.