Inside the SOC
October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package hallucination attacks, and how Darktrace products detect and prevent these threats.

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The launch was the culmination of development ongoing since 2018; it represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less overall consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools. One of these errors is AI hallucinations.

What is an AI hallucination?

An AI “hallucination” occurs when the predictive element of a generative AI or large language model (LLM) produces an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the regular and intended behavior of an AI model, which should provide a response grounded in the data it was trained on.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations are common: the AI models underlying LLMs are purely predictive and optimize for the most probable text or outcome, rather than for factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are becoming significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import for various uses. Asking an LLM for recommendations on a specific piece of code or how to solve a specific problem will often lead to a third-party package suggestion. However, a package recommended by a generative AI tool could itself be an AI hallucination: the package may not have been published, or, more precisely, may not have been published before the cut-off date of the model's training data. If the same non-existent package is commonly suggested, and developers copy the code snippet wholesale, their codebases are left vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses provided by ChatGPT pertaining to Node.js at least one un-published package was included, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.
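The discovery loop described above can be sketched in a few lines of Python. The function below is purely illustrative: it compares LLM-suggested package names against a snapshot of a registry index (in practice an attacker or defender would query PyPI or npm directly), flagging names that do not exist. All names and data here are invented for the example.

```python
def find_unpublished(suggested, registry_index):
    """Return LLM-suggested package names absent from the registry index."""
    known = {name.lower() for name in registry_index}
    return [name for name in suggested if name.lower() not in known]

# Toy registry snapshot and LLM suggestions -- both invented for illustration.
registry = {"requests", "numpy", "flask"}
llm_suggestions = ["requests", "fastjson-utils", "numpy"]

# An unpublished name like 'fastjson-utils' is a candidate an attacker could
# register with a malicious payload.
print(find_unpublished(llm_suggestions, registry))  # ['fastjson-utils']
```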

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors can discover hallucinated package names by adopting the approach used by Vulcan's researchers. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered the name of an un-published package, they can create a package with that name, include a malicious payload, and publish it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models trigger on connections to endpoints associated with generative AI tools. As such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The next stages of the attack are determined by whatever goal the malicious package has been designed to fulfil. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a wide variety of malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies to define expectations surrounding the use of such tools. It is simple, yet critical, for example, for employees to fact-check responses provided to them by generative AI tools. All packages recommended by generative AI should also be verified against non-generated data from external third-party or internal sources. It is also good practice to be cautious when downloading packages with very few downloads, as this could indicate the package is untrustworthy or malicious.
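As a minimal sketch of that vetting step, a script could apply simple trust heuristics to a package's registry metadata before installation. The field names and thresholds below are assumptions chosen for illustration, not Darktrace guidance or a real registry schema:

```python
def vet_package(metadata, min_downloads=1000, min_age_days=90):
    """Apply simple trust heuristics to registry metadata before installing.

    Returns a list of warning strings; an empty list means no heuristic fired.
    """
    warnings = []
    if metadata.get("downloads", 0) < min_downloads:
        warnings.append("very few downloads")
    if metadata.get("age_days", 0) < min_age_days:
        warnings.append("recently published")
    if not metadata.get("repository_url"):
        warnings.append("no linked source repository")
    return warnings

# A freshly registered package with no history trips every heuristic.
print(vet_package({"downloads": 42, "age_days": 3, "repository_url": ""}))
# ['very few downloads', 'recently published', 'no linked source repository']
```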

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, and Tiana Kelly, Deputy Team Lead, London & Cyber Analyst.

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Author
Charlotte Thompson
Cyber Analyst
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst

Inside the SOC
January 2, 2025

A Snake in the Net: Defending Against AiTM Phishing Threats and Mamba 2FA

What are Adversary-in-the-Middle (AiTM) phishing kits?

Phishing-as-a-Service (PhaaS) platforms have significantly lowered the barriers to entry for cybercriminals, enabling a new wave of sophisticated phishing attacks. Among the most concerning developments in this landscape is the emergence of Adversary-in-the-Middle (AiTM) phishing kits, which enhance traditional phishing tactics by allowing attackers to intercept and manipulate communications in real-time. The PhaaS marketplace offers a wide variety of innovative capabilities, with basic services starting around USD 120 and more advanced services costing around USD 250 monthly [1].

These AiTM kits are designed to create convincing decoy pages that mimic legitimate login interfaces, often pre-filling user information to increase credibility. By acting as a man-in-the-middle, attackers can harvest sensitive data such as usernames, passwords, and even multi-factor authentication (MFA) tokens without raising immediate suspicion. This capability not only makes AiTM attacks more effective but also poses a significant challenge for cybersecurity defenses [2].

Mamba 2FA is one such example of a PhaaS strain with AiTM capabilities that has emerged as a significant threat to users of Microsoft 365 and other enterprise systems. Discovered in May 2024, Mamba 2FA employs advanced AiTM tactics to bypass MFA, making it particularly dangerous for organizations relying on these security measures.

What is Mamba 2FA?

Phishing Mechanism

Mamba 2FA employs highly convincing phishing pages that closely mimic legitimate Microsoft services like OneDrive and SharePoint. These phishing URLs are crafted with a specific structure, incorporating Base64-encoded parameters. This technique allows attackers to tailor the phishing experience to the targeted organization, making the deception more effective. If an invalid parameter is detected, users are redirected to a benign error page, which helps evade automated detection systems [5].

Figure 1: Phishing page mimicking the Microsoft OneDrive service.

Real-Time Communication

A standout feature of Mamba 2FA is its use of the Socket.IO JavaScript library. This library facilitates real-time communication between the phishing page and the attackers' backend servers. As users input sensitive information, such as usernames, passwords, and MFA tokens on the phishing site, this data is immediately relayed to the attackers, enabling swift unauthorized access [5].

Multi-Factor Authentication Bypass

Mamba 2FA specifically targets MFA methods that are not resistant to phishing, such as one-time passwords (OTPs) and push notifications. When a user enters their MFA token, it is captured in real-time by the attackers, who can then use it to access the victim's account immediately. This capability significantly undermines traditional security measures that rely on MFA for account protection.

Infrastructure and Distribution

The platform's infrastructure consists of two main components: link domains and relay servers. Link domains handle initial phishing attempts, while relay servers are responsible for stealing credentials and completing login processes on behalf of the attacker. The relay servers are designed to mask their IP addresses by using proxy services, making it more difficult for security systems to block them [3].

Evasion Techniques

To evade detection by security tools, Mamba 2FA employs several strategies:

  • Sandbox Detection: The platform can detect if it is being analyzed in a sandbox environment and will redirect users to harmless pages like Google’s 404 error page.
  • Dynamic URL Generation: The URLs used in phishing attempts are frequently rotated and often short-lived to avoid being blacklisted by security solutions.
  • HTML Attachments: Phishing emails often include HTML attachments that appear benign but contain hidden JavaScript that redirects users to the phishing page [5].

Darktrace’s Coverage of Mamba 2FA

Starting in July 2024, the Darktrace Threat Research team detected a sudden rise in Microsoft 365 customer accounts logging in from unusual external sources. These accounts were accessed from an anomalous endpoint, 2607:5500:3000:fea[::]2, and exhibited unusual behaviors upon logging into Software-as-a-Service (SaaS) accounts. This activity strongly correlates with a phishing campaign using Mamba 2FA, first documented in late June 2024 and tracked as Mamba 2FA by Sekoia [2][3].

Darktrace / IDENTITY was able to identify the initial stages of the Mamba 2FA campaign by correlating subtle anomalies, such as unusual SaaS login locations. Using AI based on peer group analysis, it detected unusual behavior associated with these attacks. By leveraging Autonomous Response actions, Darktrace was able to neutralize these threats in every instance of the campaign detected.

On July 23, a SaaS user was observed logging in from a rare ASN and IP address, 2607:5500:3000:fea[::]2, originating from the US, and successfully passing through MFA authentication.

Figure 2: Model Alert Event Log showing Darktrace’s detection of a SaaS user mailbox logging in from an unusual source correlated with a Mamba 2FA relay server.

Almost an hour later, the SaaS user was observed logging in from another suspicious IP address, 45.133.172[.]86, linked to ASN AS174 COGENT-174. This IP, originating from the UK, successfully passed through MFA validation.

Following this unusual access, the SaaS user was notably observed reading emails and files that could contain sensitive payment and contract information. This behavior suggests that the attacker may have been leveraging contextual information about the target to craft further malicious phishing emails or fraudulent invoices. Subsequently, the user was detected creating a new mailbox rule titled 'fdsdf'. This rule was configured to redirect emails from a specific domain to the 'Deleted Items' folder and automatically mark them as read.

Implications of Unusual Email Rules

Such unusual email rule configurations are a common tactic employed by attackers. They often use these rules to automatically forward emails containing sensitive keywords—such as "invoice”, "payment", or "confidential"—to an external address. Additionally, these rules help conceal malicious activities, keeping them hidden from the target and allowing the attacker to operate undetected.
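A simple heuristic for the kind of rule described above could look like the following sketch. The rule fields and keyword list are assumptions chosen for illustration; this is not Darktrace's detection logic or a real mailbox API:

```python
SUSPICIOUS_KEYWORDS = {"invoice", "payment", "confidential"}

def is_suspicious_rule(rule):
    """Flag inbox rules that hide mail from the mailbox owner."""
    # Rules that bury mail in Deleted Items or silently mark it read.
    hides_mail = (rule.get("move_to") == "Deleted Items"
                  or rule.get("mark_as_read", False))
    # Rules that filter on sensitive keywords or a specific sender domain.
    targets_keywords = bool(
        SUSPICIOUS_KEYWORDS & {k.lower() for k in rule.get("keywords", [])})
    filters_sender = rule.get("sender_domain_filter") is not None
    return hides_mail and (targets_keywords or filters_sender)

# Modeled on the 'fdsdf' rule described above: redirect a sender domain's
# mail to Deleted Items and mark it as read. Domain is a placeholder.
rule = {"name": "fdsdf", "move_to": "Deleted Items", "mark_as_read": True,
        "sender_domain_filter": "example.com", "keywords": []}
print(is_suspicious_rule(rule))  # True
```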

Figure 3: The model alert “SaaS / Compliance / Anomalous New Email Rule,” pertaining to the unusual email rule created by the SaaS user named ‘fdsdf’.

Blocking the action

A few minutes later, the SaaS user from the unusual IP address 45.133.172[.]86 was observed attempting to send an email with the subject “RE: Payments.” Subsequently, Darktrace detected the user engaging in activities that could potentially establish persistence in the compromised account, such as registering a new authenticator app. Recognizing this sequence of anomalous behaviors, Darktrace implemented an Autonomous Response inhibitor, disabling the SaaS user for two hours. This action effectively contained potential malicious activities, such as the distribution of phishing emails and fraudulent invoices, and gave the customer’s security team the necessary time to conduct a thorough investigation and implement appropriate security measures.

Figure 4: Device Event Log displaying Darktrace’s Autonomous Response taking action by blocking the SaaS account.
Figure 5: Darktrace / IDENTITY highlighting the 16 model alerts that triggered during the observed compromise.

In another example from mid-July, similar activities related to the campaign were observed on another customer network. A SaaS user was initially detected logging in from the unusual external endpoint 2607:5500:3000:fea[::]2.

Figure 6: The SaaS / Compromise / SaaS Anomaly Following Anomalous Login model alert was triggered by an unusual login from a suspicious IP address linked to Mamba 2FA.

A few minutes later, in the same manner as demonstrated in the previous case, the actor was observed logging in from another rare endpoint, 102.68.111[.]240. However, this time it was from a source IP located in Lagos, Nigeria, which no other user on the network had been observed connecting from. Once logged in, the SaaS user updated the settings to "User registered Authenticator App with Notification and Code," a possible attempt to maintain persistence in the SaaS account.

Figure 7: Darktrace / IDENTITY highlighted the regular locations for the SaaS user. The rarity scores associated with the Mamba 2FA IP location and another IP located in Nigeria were classified as having very low regularity scores for this user.

Based on unusual patterns of user behavior, a Cyber AI Analyst Incident was also generated, detailing all potential account hijacking activities. Darktrace also applied an Autonomous Response action, disabling the user for over five hours. This swift action was crucial in preventing further unauthorized access, potential data breaches and further implications.

Figure 8: Cyber AI Analyst Incident detailing the unusual activities related to the SaaS account hijacking.

Since the customer had subscribed to Darktrace Security Operations Centre (SOC) services, Darktrace analysts conducted an additional human investigation confirming the account compromise.

How Darktrace Combats Phishing Threats

The initial entry point for Mamba 2FA account compromises primarily involves phishing campaigns using HTML attachments and deceptive links. These phishing attempts are designed to mimic legitimate Microsoft services, such as OneDrive and SharePoint, making them appear authentic to unsuspecting users. Darktrace / EMAIL leverages multiple capabilities to analyze email content for known indicators of phishing. This includes looking for suspicious URLs, unusual attachments (like HTML files with embedded JavaScript), and signs of social engineering tactics commonly used in phishing campaigns like Mamba 2FA. With these capabilities, Darktrace successfully detected Mamba 2FA phishing emails in networks where this tool is integrated into the security layers, consequently preventing further implications and account hijacks of their users.

Mamba 2FA URL Structure and Domain Names

The URL structure used in Mamba 2FA phishing attempts is specifically designed to facilitate the capture of user credentials and MFA tokens while evading detection. These phishing URLs typically follow a pattern that incorporates Base64-encoded parameters, which play a crucial role in the operation of the phishing kit.

The URLs associated with Mamba 2FA phishing pages generally follow this structure [6]:

https://{domain}/{m,n,o}/?{Base64 string}
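As a sketch of how an analyst might triage such links, the Base64 payload in the query string can be recovered with a few lines of Python. The URL below is invented for illustration and is not from the campaign:

```python
import base64
from urllib.parse import urlparse

def decode_mamba_param(url):
    """Decode the Base64 payload from a URL shaped like
    https://{domain}/{m,n,o}/?{Base64 string}."""
    payload = urlparse(url).query
    payload += "=" * (-len(payload) % 4)  # restore any stripped padding
    return base64.urlsafe_b64decode(payload).decode("utf-8", errors="replace")

# Illustrative example: the payload encodes the targeted mailbox address,
# which lets the phishing page pre-fill the victim's login form.
print(decode_mamba_param("https://example.test/n/?dXNlckBleGFtcGxlLmNvbQ=="))
# user@example.com
```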

Below are some potential Mamba 2FA phishing emails, with the Base64 strings already decoded, that were classified as certain threats by Darktrace / EMAIL. This classification was based on identifying multiple suspicious characteristics, such as HTML attachments containing JavaScript code, emails from senders with no previous association with the recipients, analysis of redirect links, among others. These emails were autonomously blocked from being delivered to users' inboxes.

Figure 9: Darktrace / EMAIL highlighted a possible phishing email from Mamba 2FA, which was classified as a 100% anomaly.
Figure 10: Darktrace / EMAIL highlighted a URL that resembles the characteristics associated with Mamba 2FA.

Conclusion

The rise of PhaaS platforms and the advent of AiTM phishing kits represent a concerning evolution in cyber threats, pushing the boundaries of traditional phishing tactics and exposing significant vulnerabilities in current cybersecurity defenses. The ability of these attacks to effortlessly bypass traditional security measures like MFA underscores the need for more sophisticated, adaptive strategies to combat these evolving threats.

By identifying and responding to anomalous activities within Microsoft 365 accounts, Darktrace not only highlights the importance of comprehensive monitoring but also sets a new standard for proactive threat detection. Furthermore, the autonomous threat response capabilities and the exceptional proficiency of Darktrace / EMAIL in intercepting and neutralizing sophisticated phishing attacks illustrate a robust defense mechanism that can effectively safeguard users and maintain the integrity of digital ecosystems.

Credit to Patrick Anjos (Senior Cyber Analyst) and Nahisha Nobregas (Senior Cyber Analyst)

Appendices

Darktrace Model Detections

  • SaaS / Access / M365 High Risk Level Login
  • SaaS / Access / Unusual External Source for SaaS Credential Use
  • SaaS / Compromise / Login From Rare Endpoint While User Is Active
  • SaaS / Compliance / M365 Security Information Modified
  • SaaS / Compromise / Unusual Login and New Email Rule
  • SaaS / Email Nexus / Suspicious Internal Exchange Activity
  • SaaS / Compliance / Anomalous New Email Rule
  • SaaS / Email Nexus / Possible Outbound Email Spam
  • SaaS / Compromise / Unusual Login and Account Update
  • SaaS / Compromise / SaaS Anomaly Following Anomalous Login
  • SaaS / Compliance / M365 Security Information Modified
  • SaaS / Compromise / Login From Rare Endpoint While User Is Active
  • SaaS / Compromise / Unusual Login, Sent Mail, Deleted Sent
  • SaaS / Unusual Activity / Multiple Unusual SaaS Activities
  • SaaS / Email Nexus / Unusual Login Location Following Link to File Storage
  • SaaS / Unusual Activity / Multiple Unusual External Sources For SaaS Credential
  • IaaS / Compliance / Uncommon Azure External User Invite
  • SaaS / Compliance / M365 External User Added to Group
  • SaaS / Access / M365 High Risk Level Login
  • SaaS / Compliance / M365 Security Information Modified
  • SaaS / Unusual Activity / Unusual MFA Auth and SaaS Activity
  • SaaS / Compromise / Unusual Login and Account Update

Cyber AI Analyst Incidents:

  • Possible Hijack of Office365 Account
  • Possible Hijack of AzureActiveDirectory Account
  • Possible Unsecured Office365 Resource

List of Indicators of Compromise (IoCs)

IoC - Type - Description

2607:5500:3000:fea[::]2 - IPv6 - Possible Mamba 2FA relay server

2607:5500:3000:1cab[::]2 - IPv6 - Possible Mamba 2FA relay server

References

1.     https://securityaffairs.com/136953/cyber-crime/caffeine-phishing-platform.html

2.     https://any.run/cybersecurity-blog/analysis-of-the-phishing-campaign/

3.     https://www.bleepingcomputer.com/news/security/new-mamba-2fa-bypass-service-targets-microsoft-365-accounts/

4.     https://cyberinsider.com/microsoft-365-accounts-targeted-by-new-mamba-2fa-aitm-phishing-threat/

5.     https://blog.sekoia.io/mamba-2fa-a-new-contender-in-the-aitm-phishing-ecosystem/

MITRE ATT&CK Mapping

Tactic – Technique

DEFENSE EVASION, PERSISTENCE, PRIVILEGE ESCALATION, INITIAL ACCESS - Cloud Accounts

DISCOVERY - Cloud Service Dashboard

RESOURCE DEVELOPMENT - Compromise Accounts

CREDENTIAL ACCESS - Steal Web Session Cookie

PERSISTENCE - Account Manipulation

PERSISTENCE - Outlook Rules

RESOURCE DEVELOPMENT - Email Accounts

INITIAL ACCESS - Phishing

About the author
Patrick Anjos
Senior Cyber Analyst

Blog
December 19, 2024

Darktrace Recognized in the Gartner® Magic Quadrant™ for Email Security Platforms

Darktrace has been recognized in the first-ever Gartner Magic Quadrant for Email Security Platforms (ESP). As a Challenger, we have been recognized based on our Ability to Execute and Completeness of Vision.

The Gartner Magic Quadrant for Email Security is designed to help organizations evaluate which email security solutions might be the best fit for their needs by providing a visual representation of the market vendors and the strengths and cautions of different vendors. We encourage our customers to read the full report to get the complete picture.

Darktrace / EMAIL has a unique AI approach to identifying threats, including NLP and behavioral analysis, instead of traditional security measures like signatures and sandboxing – providing protection against advanced attacks like Business Email Compromise (BEC) and spear phishing. We believe our AI-first approach delivers high-quality solutions that our customers trust, allowing them to stay ahead of sophisticated threats that other tools miss.  

We’re proud of Darktrace’s rapid growth, geographic scale, and ability to execute effectively in the email security market, which reflect our commitment to delivering high-quality, reliable solutions that meet the evolving needs of our customers.

What do we believe makes Darktrace the fastest growing email security solution on the market?

An AI-first approach to innovation: Catching the threats others miss

As one of the founders of the ICES category, Darktrace has a long history of innovation, backed by over 200 patents. While other email security solutions are only just starting to apply machine learning (ML) techniques to outdated methods like signature analysis, reputation lists, and sandboxing, Darktrace has redefined the approach to email threat detection with its pioneering AI-driven anomaly detection engine.

Traditional ESPs often miss advanced threats because they rely on rules and signatures that focus on payloads and blindly trust known sources. This approach requires constant updates and frequently fails to detect threats like Business Email Compromise and Spear Phishing. In contrast, Darktrace / EMAIL uses advanced anomaly detection to identify the most sophisticated threats by focusing on unusual patterns and behaviors. This innovative approach has consistently delivered superior detection, stopping on average 58% of the threats that other solutions in the security stack miss.1

But our AI-first approach doesn’t stop at the inbox. At Darktrace, we transcend the limitations of traditional email security by leveraging a platform that unifies insights across multiple domains, providing robust protection against multi-domain threats. Our award-winning solutions defend the most popular attack vectors, including email, messaging, network, and identity protection. By combining signals from all domains, we establish unique behavioral profiles for each device and user, significantly enhancing detection precision.  

This pioneering approach has led to introducing industry-first advancements like QR code analysis and automated incident investigations, alongside game-changing functionality including:

  • Microsoft Teams security with advanced messaging analysis: The ability to identify critical early phishing and insider threats across both email and Microsoft Teams messaging.  
  • AI analyst narratives for improved end-user reporting: reduces phishing investigations by 60% by exposing unique narratives that provide the context of each received email and give feedback to each employee as they interact with their mail.2
  • Mailbox Security Assistant: to perform advanced behavioral browser analysis and stop malicious links within webpages, detecting and remediating 70% more malicious phishing links than traditional tools.3  
  • AI-based, autonomous data loss prevention: to immediately secure your organization from misdirected emails, insider threats, and data loss, both classified and unclassified, without any administrative overhead.

Customer trust that fuels exponential growth

With almost 5,000 customers in under 5 years, we've doubled the growth rate of other vendors in the email security market. Our rapid market penetration, fueled by customer satisfaction and pioneering technology, showcases our revolutionary approach and sets new industry standards. 

Darktrace’s exceptional customer retention is fueled by an unparalleled customer experience, extensive regional support, dedicated account teams, and cutting-edge scalable technology. We pride ourselves on having a global network with local expertise, consisting of 110 worldwide offices which provide local language and technical support to offer multilingual, in-house assistance to our customer base.

Check it out – Darktrace / EMAIL has the highest percentage of 5-star ratings with a 4.8 rating on Gartner® Peer Insights™.4

Supporting every stage of your email security journey

Darktrace / EMAIL supports your security maturity journey, from first time security buyers to mature security stacks looking to augment their existing ESPs – by handling advanced threats without extensive tuning. And unlike other solutions that create a siloed and parallel solution, it works harmoniously with native email providers to create a modern email security stack. That’s why Darktrace performs well with first-time email security buyers and has strong renewal rates.

Integrating with Microsoft and Google via API, we replace traditional Secure Email Gateways (SEGs) with a modern, comprehensive email security stack. By combining approaches, our solution merges attack-centric analysis, which learns attack patterns and threat intelligence, with a business-centric approach that understands user behavior and inbox activity to deliver a unified stack that defends the entire threat spectrum – leading Darktrace to be recognized as Microsoft Partner of the year UK 2024.  

Our user-friendly, self-learning AI solution requires minimal tuning and deployment, making it perfect for customers looking for a highly usable but lightly configurable solution that will accompany them throughout their lifetime as they mature their email security stack in line with the evolving threat landscape.

Learn more

Get complimentary access to the full Gartner® Magic Quadrant™ for Email Security Platforms here.

To learn more about Darktrace / EMAIL or to get a free demo, check out the product hub.

References

1 From September 1 – December 31 2023, 58% of the phishing emails analyzed by Darktrace / EMAIL had already passed through native spam filtering and email security controls. (Darktrace End of Year Threat Report 2023)

2 When customers deployed the Darktrace / EMAIL Outlook Add-in there was a 60% decrease in incorrectly reported phishing emails. Darktrace Internal Research, 2024

3 Once a user reports phishing that contains a link, an automated second level triage engages our link analysis infrastructure expanding the signals analyzed. Darktrace Internal Research, 2024

4 Based on 252 reviews as of 19th December 2024

About the author
Carlos Gray
Product Manager