Blog / Email / April 28, 2021

How AI Email Security Benefits Human Defenders
Learn how autonomous AI frees up IT teams and allows them to focus on what matters. Say goodbye to weighed-down teams and lengthy security processes.

At the heart of any email attack is the goal of moving the recipient to engage: whether that’s clicking a link, filling in a form, or opening an attachment. And with over nine in ten cyber-attacks starting with an email, this attack vector continues to prove successful, despite organizations’ best efforts to safeguard their workforce by deploying email gateways and training employees to spot phishing attempts.

Email attackers have seen such success because they understand their victims. They know that, ultimately, human beings are creatures of habit, prone to error, and susceptible to their emotions. Years of experience have allowed attackers to fine-tune their emails, making them more plausible and more provocative. Automated tools are now increasing the speed and scale at which criminals can buy new domains and send emails en masse. This makes it even easier to ‘A/B test’ attack methods: abandoning those that don’t see high success rates and capitalizing on those that do.

We can classify phishing attempts into five broad categories, each aiming to trigger a different emotional reaction and elicit a response.

  • Fear: “We have detected a virus on your device, log in to your McAfee account.”
  • Curiosity: “You have 3 new voicemails, click here.”
  • Generosity: “COVID-19 has greatly impacted homelessness in your area. Donate now.”
  • Greed: “Only 23 iPhones left to give away, act now!”
  • Concern: “Coronavirus outbreak in your area: Find out more.”

It’s worth noting that today’s increasingly distributed workforces are more susceptible to these techniques: employees are isolated at home and hungry for new information.

Turning to tech

As email attacks continue to trick employees and find success, many organizations have realized that the built-in security tools that come with their email provider aren’t enough to defend against today’s attacks. Additional email gateways are successful in catching spam and other low-hanging fruit, but fail to stop advanced attacks – particularly those leveraging novel malware, new domains, or advanced techniques. These advanced attacks are also the most damaging to businesses.

This failure is due to an inherent weakness in the legacy approach of traditional security tools. They compare inbound mail against lists of ‘known bad’ IPs, domains, and file hashes. Senders and recipients are treated simply as data points – ignoring the nuances of the human beings behind the keyboards.

Looking at these metrics in isolation fails to take into account the full context that can only be gained by understanding the people behind email interactions: where they usually log in from, who they communicate with, how they write, and what types of attachments they send and receive. It is this rich, personal context that reveals seemingly benign emails to be unmistakably malicious, especially when other data fails to reveal the danger.

Misunderstanding the human

Frustrated with the ineffectiveness of traditional tools, many organizations think that the solution is to minimize the chances that employees engage with malicious emails through comprehensive employee training. Indeed, companies often attempt to train their employees to spot malicious emails to compensate for their technology’s lack of detection.

Considering humans to be the last line of defense is dangerous, and this approach overlooks the fact that today’s sophisticated fakes can appear indistinguishable from legitimate emails. It’s only when you really break an email down beyond the text, beyond the personal name, beyond the domain and email address (in the case of compromised trusted senders), that you can distinguish real from fake.

Large data breaches of recent years have given attackers greater access than ever to corporate emails and stolen passwords, and so supply chain attacks are becoming increasingly common. When attackers take over a trusted account or an existing email thread, how can an employee be expected to notice a subtle change in wording or the different type of attached document? However rigorous the internal training program and regardless of how vigilant employees are, we are now at the point where humans cannot spot these very subtle indicators. And one click is all it takes.

Understanding the human

Email security has long remained an unsolved piece of the complex cyber security puzzle. The failure of both traditional tools and employee training has prompted organizations to take a radically different approach. Thousands of businesses across the world, in both the public and private sectors, use artificial intelligence that understands the human behind the keyboard and forms a nuanced and continually evolving understanding of email interactions across the business.

By learning what a human does, who they interact with, how they write, and the substance of a typical conversation between any two or more people, AI begins to understand the habits of employees, and over time it builds a comprehensive picture of their normal patterns of behavior. Most importantly, AI is self-learning, continuously revising its understanding of ‘normal’ so that when employees’ habits change, so does the AI’s understanding.

This enables the technology to detect behavioral anomalies that fall outside of an employee’s ‘pattern of life’, or the pattern of life for the organization as a whole.
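To make the ‘pattern of life’ idea concrete, here is a deliberately simplified sketch (not Darktrace’s actual model) that baselines one invented behavioral feature, a user’s typical sending hours, and flags a message only when an off-hours send coincides with a never-before-seen recipient:

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical sketch: a per-user "pattern of life" built from observed
# send hours and known contacts. A real self-learning system models many
# more features and updates its baselines continuously.
class PatternOfLife:
    def __init__(self):
        self.send_hours = defaultdict(list)   # user -> hours observed
        self.contacts = defaultdict(set)      # user -> known recipients

    def observe(self, user, hour, recipient):
        self.send_hours[user].append(hour)
        self.contacts[user].add(recipient)

    def is_anomalous(self, user, hour, recipient, z_threshold=3.0):
        hours = self.send_hours[user]
        if len(hours) < 5:                    # not enough history yet
            return False
        mu, sigma = mean(hours), stdev(hours)
        off_hours = sigma > 0 and abs(hour - mu) / sigma > z_threshold
        new_contact = recipient not in self.contacts[user]
        return off_hours and new_contact      # combine weak signals

pol = PatternOfLife()
for h in [9, 10, 9, 11, 10, 9, 10]:           # normal working hours
    pol.observe("alice", h, "bob@corp.example")
print(pol.is_anomalous("alice", 3, "attacker@evil.example"))  # True
```

The key point is that ‘anomalous’ is defined per user against their own learned routine, not by a global rule that every mailbox shares.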

This fundamentally new approach to email security enables the system to recognize the subtle indicators of a threat and make accurate decisions to stop or allow emails to pass through, even if a threat has never been seen before.

Sitting behind email gateways, this self-learning technology has extremely high catch rates. It has caught countless malicious emails that other tools missed, from impersonations of senior financial personnel to ‘fearware’ that played on the workforce’s uncertainties at a time of pandemic.

Attackers are continuing to innovate, and automation has led to a new wave of email threats. 88% of security leaders now believe that cyber-attacks powered by offensive AI are inevitable. The email threat landscape is changing rapidly, and we can expect hoax emails to become both more frequent and more convincing. Now is a crucial moment for organizations to prepare for this eventuality by adopting AI in their email defenses.

Author
Dan Fein
VP, Product

Based in New York, Dan joined Darktrace’s technical team in 2015, helping customers quickly achieve a complete and granular understanding of Darktrace’s product suite. Dan has a particular focus on Darktrace/Email, ensuring that it is effectively deployed in complex digital environments, and works closely with the development, marketing, sales, and technical teams. Dan holds a Bachelor’s degree in Computer Science from New York University.


Blog / November 26, 2024

Why artificial intelligence is the future of cybersecurity

Introduction: AI & Cybersecurity

In the wake of artificial intelligence (AI) becoming more commonplace, it’s no surprise that threat actors are also adopting AI in their attacks at an accelerated pace. AI augments complex tasks such as spear-phishing, deepfakes, polymorphic malware generation, and advanced persistent threat (APT) campaigns, significantly enhancing the sophistication and scale of attackers’ operations. This has put security professionals in a reactive state, struggling to keep pace with the proliferation of threats.

As AI reshapes the future of cyber threats, defenders are also looking to integrate AI technologies into their security stack. Adopting AI-powered solutions in cybersecurity enables security teams to detect and respond to these advanced threats more quickly and accurately, as well as automate traditionally manual and routine tasks. According to Darktrace’s 2024 State of AI Cybersecurity Report, improving threat detection, identifying exploitable vulnerabilities, and automating low-level security tasks were the top three ways practitioners saw AI enhancing their security team’s capabilities [1], underscoring the wide-ranging capabilities of AI in cyber.

In this blog, we will discuss how AI has impacted the threat landscape, the rise of generative AI and AI adoption in security tools, and the importance of using multiple types of AI in cybersecurity solutions for a holistic and proactive approach to keeping your organization safe.  

The impact of AI on the threat landscape

The integration of AI and cybersecurity has brought about significant advancements across industries. However, it also introduces new security risks that challenge traditional defenses. Three major concerns with the misuse of AI by adversaries are: (1) an increase in novel social engineering attacks that are harder to detect and able to bypass traditional security tools, (2) easier access for less experienced threat actors, who can now deliver advanced attacks at speed and scale, and (3) attacks on AI itself, including machine learning models, data corpora, and APIs or interfaces.

In the context of social engineering, AI can be used to create more convincing phishing emails, conduct advanced reconnaissance, and simulate human-like interactions to deceive victims more effectively. Generative AI tools, such as ChatGPT, are already being used by adversaries to craft these sophisticated phishing emails, which can more aptly mimic human semantics without spelling or grammatical error and include personal information pulled from internet sources such as social media profiles. And this can all be done at machine speed and scale. In fact, Darktrace researchers observed a 135% rise in ‘novel social engineering attacks’ across Darktrace / EMAIL customers in 2023, corresponding to the widespread adoption and use of ChatGPT [2].  

Furthermore, these sophisticated social engineering attacks are now able to circumvent traditional security tools. Between December 21, 2023 and July 5, 2024, Darktrace / EMAIL detected 17.8 million phishing emails across the fleet, with 62% of these phishing emails successfully bypassing Domain-based Message Authentication, Reporting, and Conformance (DMARC) verification checks [2].
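For context, a DMARC policy is published as a DNS TXT record at `_dmarc.<domain>` in a simple tag-value format. The sketch below (using a made-up record) parses that format; note that a DMARC check says nothing about an email’s content or intent, which is part of why well-crafted phishing from authenticated or aligned domains can still sail through:

```python
# Illustrative sketch: parse a DMARC TXT record (published in DNS at
# _dmarc.<domain>) into its tag-value pairs. The record below is an
# invented example; real verification also checks SPF/DKIM alignment.
def parse_dmarc(record: str) -> dict:
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=quarantine; rua=mailto:reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])   # quarantine: mail failing the check should be held
```

A domain publishing `p=none`, for instance, asks receivers to monitor but take no action, so even a ‘passing’ DMARC deployment may enforce nothing.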

And while the proliferation of novel attacks fueled by AI is persisting, AI also lowers the barrier to entry for threat actors. Publicly available AI tools make it easy for adversaries to automate complex tasks that previously required advanced technical skills. Additionally, AI-driven platforms and phishing kits available on the dark web provide ready-made solutions, enabling even novice attackers to execute effective cyber campaigns with minimal effort.

The impact of adversarial use of AI on the ever-evolving threat landscape is important for organizations to understand as it fundamentally changes the way we must approach cybersecurity. However, while the intersection of cybersecurity and AI can have potentially negative implications, it is important to recognize that AI can also be used to help protect us.

A generation of generative AI in cybersecurity

When the topic of AI in cybersecurity comes up, it’s typically in reference to generative AI, which became popularized in 2023. While it does not solely encapsulate what AI cybersecurity is or what AI can do in this space, it’s important to understand what generative AI is and how it can be implemented to help organizations get ahead of today’s threats.  

Generative AI (e.g., ChatGPT or Microsoft Copilot) is a type of AI that creates new or original content. It has the capability to generate images, videos, or text based on information it learns from large datasets. These systems use advanced algorithms and deep learning techniques to understand patterns and structures within the data they are trained on, enabling them to generate outputs that are coherent, contextually relevant, and often indistinguishable from human-created content.

For security professionals, generative AI offers some valuable applications. Primarily, it’s used to transform complex security data into clear and concise summaries. By analyzing vast amounts of security logs, alerts, and technical data, it can contextualize critical information quickly and present findings in natural, comprehensible language. This makes it easier for security teams to understand critical information quickly and improves communication with non-technical stakeholders. Generative AI can also automate the creation of realistic simulations for training purposes, helping security teams prepare for various cyberattack scenarios and improve their response strategies.  

Despite its advantages, generative AI also has limitations that organizations must consider. One challenge is the potential for generating false positives, where benign activities are mistakenly flagged as threats, which can overwhelm security teams with unnecessary alerts. Moreover, implementing generative AI requires significant computational resources and expertise, which may be a barrier for some organizations. It can also be susceptible to prompt injection attacks, and there are risks of intellectual property or sensitive data being leaked when using publicly available generative AI tools. In fact, according to the MIT AI Risk Repository, there are potentially over 700 risks that need to be mitigated with the use of generative AI.


For more information on generative AI's impact on the cyber threat landscape download the Darktrace Data Sheet

Beyond the Generative AI Glass Ceiling

Generative AI has a place in cybersecurity, but security professionals are starting to recognize that it’s not the only AI organizations should be using in their security tool kit. In fact, according to Darktrace’s State of AI Cybersecurity Report, “86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats.” As we look toward the future of AI in cybersecurity, it’s critical to understand that different types of AI have different strengths and use cases and choosing the technologies based on your organization’s specific needs is paramount.

There are a few types of AI used in cybersecurity that serve different functions. These include:

Supervised Machine Learning: Widely used in cybersecurity due to its ability to learn from labeled datasets. These datasets include historical threat intelligence and known attack patterns, allowing the model to recognize and predict similar threats in the future. For example, supervised machine learning can be applied to email filtering systems to identify and block phishing attempts by learning from past phishing emails. This is human-led training facilitating automation based on known information.  

Large Language Models (LLMs): Deep learning models trained on extensive datasets to understand and generate human-like text. LLMs can analyze vast amounts of text data, such as security logs, incident reports, and threat intelligence feeds, to identify patterns and anomalies that may indicate a cyber threat. They can also generate detailed and coherent reports on security incidents, summarizing complex data into understandable formats.

Natural Language Processing (NLP): Involves the application of computational techniques to process and understand human language. In cybersecurity, NLP can be used to analyze and interpret text-based data, such as emails, chat logs, and social media posts, to identify potential threats. For instance, NLP can help detect phishing attempts by analyzing the language used in emails for signs of deception.

Unsupervised Machine Learning: Continuously learns from raw, unstructured data without predefined labels. It is particularly useful in identifying new and unknown threats by detecting anomalies that deviate from normal behavior. In cybersecurity, unsupervised learning can be applied to network traffic analysis to identify unusual patterns that may indicate a cyberattack. It can also be used in endpoint detection and response (EDR) systems to uncover previously unknown malware by recognizing deviations from typical system behavior.

Figure 1: Types of AI in cybersecurity

Employing multiple types of AI in cybersecurity is essential for creating a layered and adaptive defense strategy. Each type of AI, from supervised and unsupervised machine learning to large language models (LLMs) and natural language processing (NLP), brings distinct capabilities that address different aspects of cyber threats. Supervised learning excels at recognizing known threats, while unsupervised learning uncovers new anomalies. LLMs and NLP enhance the analysis of textual data for threat detection and response and aid in understanding and mitigating social engineering attacks. By integrating these diverse AI technologies, organizations can achieve a more holistic and resilient cybersecurity framework, capable of adapting to the ever-evolving threat landscape.
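To make the supervised/unsupervised distinction concrete, here is a toy sketch of the supervised side: a word-count Naive Bayes classifier trained on a handful of invented, labeled emails. It can only recognize what resembles its labeled history, which is exactly the limitation unsupervised anomaly detection addresses:

```python
from collections import Counter
import math

# Toy supervised-learning sketch with invented training data. Real
# filters train on far larger datasets with much richer features.
def train(examples):
    counts = {"phish": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = set(counts["phish"]) | set(counts["ham"])
    scores = {}
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))
        for w in text.lower().split():
            # Laplace smoothing avoids log(0) for unseen words
            p = (counts[label][w] + 1) / (sum(counts[label].values()) + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

examples = [
    ("verify your account now", "phish"),
    ("urgent click here to claim prize", "phish"),
    ("meeting notes attached see agenda", "ham"),
    ("lunch tomorrow at noon", "ham"),
]
counts, totals = train(examples)
print(classify("click here to verify your account", counts, totals))  # phish
```

An unsupervised system, by contrast, needs no labels at all: it flags an email only because it deviates from learned behavior, which is what allows it to catch attacks with no historical precedent.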

A Multi-Layered AI Approach with Darktrace

AI-powered security solutions are emerging as a crucial line of defense against an AI-powered threat landscape. In fact, “Most security stakeholders (71%) are confident that AI-powered security solutions will be better able to block AI-powered threats than traditional tools.” And 96% agree that AI-powered solutions will level up their organization’s defenses.  As organizations look to adopt these tools for cybersecurity, it’s imperative to understand how to evaluate AI vendors to find the right products as well as build trust with these AI-powered solutions.  

Darktrace, a leader in AI cybersecurity since 2013, emphasizes interpretability, explainability, and user control, ensuring that our AI is understandable, customizable and transparent. Darktrace’s approach to cyber defense is rooted in the belief that the right type of AI must be applied to the right use cases. Central to this approach is Self-Learning AI, which is crucial for identifying novel cyber threats that most other tools miss. This is complemented by various AI methods, including LLMs, generative AI, and supervised machine learning, to support the Self-Learning AI.  

Darktrace focuses on where AI can best augment the people in a security team and where it can be used responsibly to have the most positive impact on their work. With a combination of these AI techniques, applied to the right use cases, Darktrace enables organizations to tailor their AI defenses to unique risks, providing extended visibility across their entire digital estates with the Darktrace ActiveAI Security Platform™.

Credit to Ed Metcalf, Senior Director of Product Marketing, AI & Innovations, and Nicole Carignan, VP of Strategic Cyber AI, for their contributions to this blog.


To learn more about Darktrace and AI in cybersecurity download the CISO’s Guide to Cyber AI here.

Download the white paper to learn how buyers should approach purchasing AI-based solutions. It includes:

  • Key steps for selecting AI cybersecurity tools
  • Questions to ask and responses to expect from vendors
  • An overview of the tools available to help find the right fit
  • Guidance on aligning AI investments with security goals and needs
About the author
Brittany Woodsmall
Product Marketing Manager, Attack Surface Management

Blog / November 19, 2024

Darktrace Leading the Future of Network Detection and Response with Recognition from KuppingerCole

KuppingerCole has recognized Darktrace as an overall Leader, Product Leader, Market Leader and Innovation Leader in the KuppingerCole Leadership Compass: Network Detection and Response (2024).

With the perimeter all but dissolved, Network Detection and Response (NDR) tools are quickly becoming a critical component of the security stack, as the main tool to span the modern network. NDR connects on-premises infrastructure to cloud, remote workers, identities, SaaS applications, and IoT/OT – coverage that EDR, which requires agents and limits visibility to individual devices, cannot provide.

KuppingerCole Analysts AG designated Darktrace an ‘Overall Leader’ because of our continual innovation around user-led security. Self-Learning AI, together with automated triage through Cyber AI Analyst and real-time autonomous response actions, has been instrumental in helping security teams stop potential threats before they become a breach. With this time saved, Darktrace leads teams beyond reactive security toward truly hardening the network, letting them spend more time on preventive security measures.

Network Detection and Response protects where others fail to reach

NDR solutions operate at the network level, deploying inside or parallel to your network to ingest raw traffic via virtual or physical sensors. This gives them unprecedented potential to identify anomalies and possible breaches in any network - far beyond simple on-prem, into dynamic virtual environments, cloud or hybrid networks, cloud applications, and even remote devices accessing the corporate network via ZTNA or VPN.

Rather than looking at process-level data, NDR can detect the lateral movement of an adversary across multiple assets by analyzing network traffic patterns that endpoint solutions may not be able to identify [1]. In the face of a growing, complex environment, organizations large and small will benefit from using NDR either in conjunction with, or as the foundation for, their Extended Detection and Response (XDR) for a unified view that improves their overall threat detection, ease of investigation, and response times.
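As a simplified illustration of traffic-pattern analysis (invented data and thresholds, not any NDR product’s actual logic), lateral movement often appears as a sudden ‘fan-out’: one host contacting far more distinct internal peers than its learned baseline:

```python
from collections import defaultdict

# Illustrative sketch: flag possible lateral movement by counting how
# many distinct internal hosts each source talks to, and comparing that
# fan-out against the source's historical norm.
def fanout(connections):
    seen = defaultdict(set)
    for src, dst in connections:
        seen[src].add(dst)
    return {src: len(dsts) for src, dsts in seen.items()}

def flag_lateral_movement(baseline, today, factor=3):
    flagged = []
    for src, n in fanout(today).items():
        normal = baseline.get(src, 1)
        if n >= factor * normal:   # sudden fan-out vs. learned baseline
            flagged.append(src)
    return flagged

baseline = {"10.0.0.5": 2, "10.0.0.9": 4}   # typical distinct peers/day
today = [("10.0.0.5", f"10.0.0.{i}") for i in range(20, 28)]  # 8 peers
print(flag_lateral_movement(baseline, today))  # ['10.0.0.5']
```

Because the signal lives in traffic between hosts, it is visible from the network even when no agent runs on the compromised machine.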

Today's NDR solutions are expected to include advanced ML and artificial intelligence (AI) algorithms [1]

Traditional IDS and IPS systems are labor-intensive, requiring continuous rule creation, maintenance of outdated signatures, and manual monitoring for false positives or incorrect actions. This is no longer viable against a higher volume of attacks and a changing landscape, making NDR the natural network tool for meeting these evolutions. The role of AI in NDR is designed to meet this challenge: “to reduce both the labor need for analysis and false positives, as well as add value by improving anomaly detection and overall security posture”.

Celebrating success in leadership and innovation

Darktrace is proud to have been recognized as an NDR “Overall Leader” in KuppingerCole Analysts AG’s Leadership Compass. The report gave further recognition to Darktrace as a “Product Leader”, “Innovation Leader” and “Market Leader”.

Maximum scores were received for core product categories, in addition to market presence and financial strength. Particular attention was directed to our innovation. This year has seen several NDR updates via Darktrace’s ActiveAI Security Platform version 6.2 which has enhanced investigation workflows and provided new AI transparency within the toolset.

Positive scores were also received for Darktrace’s deployment ecosystem and surrounding support, minimizing the need for extraneous integrations through a unique platform architecture that connects with over 90 other vendors.

Figure 1: High scores received in Darktrace’s KuppingerCole spider chart across core NDR capability areas

Darktrace’s pioneering AI approach sets it apart

Darktrace / NETWORK’s approach is fundamentally different from other NDRs. Continual anomaly-based detection (our Self-Learning AI) understands what is normal across each of your network entities, and then examines deviations from these behaviors rather than needing to apply static rules or ML to adversary techniques. As a result, Darktrace / NETWORK can focus on surfacing the novel threats that cannot be anticipated, while our proactive solutions expose gaps that can be exploited and reduce the risk of known threats.

Across the millions of possible network events that may occur, Darktrace’s Cyber AI Analyst reduces the manual workload for SOC teams by presenting only what is most important, in complete, collated incidents. This accelerates SOC Level 2 analyses of incidents by 10x [2], giving time back, first for any necessary response and then for preventive workflows.

Finally, when incidents begin to escalate, Darktrace can natively (or via third-party) autonomously respond and take precise actions based on a contextual understanding of both the affected assets and incident in question so that threats can be disarmed without impacting wider operations.

Within the KuppingerCole report, several standout strengths were listed:

  • Cyber AI Analyst was celebrated as a core differentiator, enhancing both visibility and investigation into critical network issues and allowing a faster response.
  • Darktrace / NETWORK was singled out for its user benefits: a clear interface for analysts with advanced filtering and analytical tools, and efficient role-based access control (RBAC) and configuration options for administrators.
  • At the product level, Darktrace was recognized for complete network traffic analysis (NTA) capabilities, allowing extensive analysis of components like application use/type, fingerprinting, and source/destination communication, in addition to comprehensive protocol support across a range of network device types (IT, OT, IoT, and mobile) and detailed MITRE ATT&CK mapping.
  • Finally, at the heart of it all, Darktrace’s innovation was highlighted in relation to its intrinsic Self-Learning AI, which utilizes multiple layers of deep learning, neural networks, LLMs, NLP, generative AI and more to understand network activity and filter it for what’s critical at an individual customer level.

Going beyond reactive security

Darktrace’s visibility and AI-enabled detection, investigation and response enable security teams to focus on hardening gaps in their network through contextual relevance & priority. Darktrace / NETWORK explicitly gives time back to security teams allowing them to focus on the bigger strategic and governance workflows that sometimes get overlooked. This is enabled through proactive solutions intrinsically connected to our NDR:

  • Darktrace / Proactive Exposure Management, which looks beyond just CVE risks to instead discover, prioritize and validate risks by business impact and how to mobilize against them early, to reduce the number of real threats security teams face.
  • Darktrace / Incident Readiness & Recovery, a solution-based rather than service-based approach to incident response (IR) that lets teams respond in the best way to each incident and proactively test the familiarity and effectiveness of their IR workflows with sophisticated incident simulations involving their own analysts and assets.

Together, these solutions allow Darktrace / NETWORK to go beyond the traditional NDR and shift teams to a more hardened and proactive state.

Putting customers first

Customers continue to sit at the forefront of Darktrace R&D, with their emerging needs and pain points being the direct inspiration for our continued innovation.

This year Darktrace / NETWORK has protected thousands of customers against the latest attacks, from data exfiltration and destruction to unapproved privilege escalation and ransomware, including strains like Medusa, Qilin and ALPHV/BlackCat.

In each instance, Darktrace / NETWORK was able to provide a holistic lens of the anomalies present in their traffic, collated those that were important, and either responded or gave teams the ability to take targeted actions against their threats – even when adversaries pivoted. In one example of a Gootloader compromise, Darktrace ensured a SOC went from detection to recovery within 5 days, 92.8% faster than the average containment time of 69 days.

Results like these, focused on user-led security, have secured Darktrace’s position within the latest NDR Leadership Compass.

To find out more about what makes Darktrace / NETWORK special, read the full KuppingerCole report.

References

[1] Osman Celik, KuppingerCole Leadership Compass: Network Detection and Response (2024)

[2] Darktrace's AI Analyst customer fleet data

[3] https://www.ibm.com/reports/data-breach

About the author
Gabriel Few-Wiegratz
Product Marketing Manager