Learn how autonomous AI frees up IT teams and allows them to focus on what matters. Say goodbye to weighed-down teams and lengthy security processes.
At the heart of any email attack is the goal of moving the recipient to engage: whether that’s clicking a link, filling in a form, or opening an attachment. And with over nine in ten cyber-attacks starting with an email, this attack vector continues to prove successful, despite organizations’ best efforts to safeguard their workforce by deploying email gateways and training employees to spot phishing attempts.
Email attackers have seen such success because they understand their victims. They know that, ultimately, human beings are creatures of habit, prone to error, and susceptible to their emotions. Years of experience have allowed attackers to fine-tune their emails, making them more plausible and more provocative. Automated tools are now increasing the speed and scale at which criminals can buy new domains and send emails en masse. This makes it even easier to ‘A/B test’ attack methods: abandoning those that don’t see high success rates and capitalizing on those that do.
We can classify phishing attempts into five broad categories, each aiming to trigger a different emotional reaction and elicit a response.
Fear: “We have detected a virus on your device, log in to your McAfee account.”
Curiosity: “You have 3 new voicemails, click here.”
Generosity: “COVID-19 has greatly impacted homelessness in your area. Donate now.”
Greed: “Only 23 iPhones left to give away, act now!”
Concern: “Coronavirus outbreak in your area: Find out more.”
It’s worth noting that today’s increasingly dynamic workforces are more susceptible to these techniques, isolated in their homes and hungry for new information.
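As a toy illustration of how these five lures might be tagged automatically, the sketch below matches subjects against invented keyword lists (the keywords are made up for the example and are nothing like a production ruleset):

```python
from typing import Optional

# Illustrative keyword lists only; real classifiers use far richer features.
LURE_KEYWORDS = {
    "fear": ["virus", "detected", "suspended", "unauthorized"],
    "curiosity": ["voicemail", "view message", "click here"],
    "generosity": ["donate", "charity", "relief"],
    "greed": ["winner", "give away", "free", "act now"],
    "concern": ["outbreak", "alert in your area", "warning"],
}

def tag_lure(subject: str) -> Optional[str]:
    """Return the first lure category whose keyword appears in the subject."""
    text = subject.lower()
    for category, keywords in LURE_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return None
```

Running the examples above through it, “Only 23 iPhones left to give away, act now!” tags as greed and “You have 3 new voicemails, click here” tags as curiosity.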
Turning to tech
As email attacks continue to trick employees and find success, many organizations have realized that the built-in security tools that come with their email provider aren’t enough to defend against today’s attacks. Additional email gateways are successful in catching spam and other low-hanging fruit, but fail to stop advanced attacks – particularly those leveraging novel malware, new domains, or advanced techniques. These advanced attacks are also the most damaging to businesses.
This failure is due to an inherent weakness in the legacy approach of traditional security tools. They compare inbound mail against lists of ‘known bad’ IPs, domains, and file hashes. Senders and recipients are treated simply as data points – ignoring the nuances of the human beings behind the keyboards.
Looking at these metrics in isolation fails to take into account the full context that can only be gained by understanding the people behind email interactions: where they usually log in from, who they communicate with, how they write, and what types of attachments they send and receive. It is this rich, personal context that reveals seemingly benign emails to be unmistakably malicious, especially when other data fails to reveal the danger.
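To make the contrast concrete, here is a minimal sketch of the two approaches; every list, profile field, and value below is illustrative, not taken from any real product:

```python
# Legacy approach: membership tests against 'known bad' lists.
KNOWN_BAD_DOMAINS = {"evil-domain.example"}
KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}

def legacy_verdict(sender_domain: str, attachment_hash: str) -> bool:
    """True means 'block'. Anything not on a list sails through."""
    return sender_domain in KNOWN_BAD_DOMAINS or attachment_hash in KNOWN_BAD_HASHES

# Contextual approach: compare the message against what is normal for this sender.
def contextual_score(profile: dict, email: dict) -> float:
    """Fraction of behavioral norms the email breaks (higher = more suspicious)."""
    anomalies = 0
    if email["login_country"] not in profile["usual_login_countries"]:
        anomalies += 1
    if email["recipient"] not in profile["usual_recipients"]:
        anomalies += 1
    if email["attachment_type"] not in profile["usual_attachment_types"]:
        anomalies += 1
    return anomalies / 3
```

A compromised trusted sender passes the legacy check by definition (their domain is not ‘known bad’), while the contextual score still rises when the login location, recipient, or attachment type breaks the sender’s pattern.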
Misunderstanding the human
Frustrated with the ineffectiveness of traditional tools, many organizations conclude that the solution is to minimize the chances that employees engage with malicious emails through comprehensive training: teaching staff to spot malicious emails in order to compensate for their technology’s lack of detection.
Considering humans to be the last line of defense is dangerous, and this approach overlooks the fact that today’s sophisticated fakes can appear indistinguishable from legitimate emails. It’s only when you break an email down beyond the text, beyond the display name, beyond the domain and email address (in the case of compromised trusted senders) that you can distinguish real from fake.
Large data breaches of recent years have given attackers greater access than ever to corporate emails and stolen passwords, and so supply chain attacks are becoming increasingly common. When attackers take over a trusted account or an existing email thread, how can an employee be expected to notice a subtle change in wording or the different type of attached document? However rigorous the internal training program and regardless of how vigilant employees are, we are now at the point where humans cannot spot these very subtle indicators. And one click is all it takes.
Understanding the human
Email security has long remained an unsolved piece of the complex cyber security puzzle. The failure of both traditional tools and employee training has prompted organizations to take a radically different approach. Thousands of businesses across the world, in both the public and private sectors, use artificial intelligence that understands the human behind the keyboard and forms a nuanced, continually evolving understanding of email interactions across the business.
By learning what a human does, who they interact with, how they write, and the substance of a typical conversation between any two or more people, AI begins to understand the habits of employees, and over time it builds a comprehensive picture of their normal patterns of behavior. Most importantly, AI is self-learning, continuously revising its understanding of ‘normal’ so that when employees’ habits change, so does the AI’s understanding.
This enables the technology to detect behavioral anomalies that fall outside of an employee’s ‘pattern of life’, or the pattern of life for the organization as a whole.
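One minimal way to picture a self-revising ‘pattern of life’ (purely illustrative, not Darktrace’s actual algorithm) is an exponentially weighted baseline that every new observation is first scored against and then folded into, so ‘normal’ drifts as habits change:

```python
class PatternOfLife:
    """Toy self-learning baseline over one numeric behavioral feature,
    e.g. emails sent per hour. Illustrative sketch only."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha   # how quickly 'normal' adapts to new behavior
        self.mean = None     # learned baseline
        self.var = 1.0       # rough spread estimate

    def observe(self, value: float) -> float:
        """Score the value against the baseline, then fold it in."""
        if self.mean is None:
            self.mean = value
            return 0.0
        deviation = abs(value - self.mean) / (self.var ** 0.5)
        # Self-learning step: the baseline drifts toward recent behavior.
        self.mean = (1 - self.alpha) * self.mean + self.alpha * value
        self.var = (1 - self.alpha) * self.var + self.alpha * (value - self.mean) ** 2
        return deviation
```

After a handful of ordinary observations, a routine value scores near zero while a sudden spike scores orders of magnitude higher, which is the behavioral-anomaly signal described above in miniature.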
This fundamentally new approach to email security enables the system to recognize the subtle indicators of a threat and make accurate decisions to stop or allow emails to pass through, even if a threat has never been seen before.
Sitting behind email gateways, this self-learning technology has extremely high catch rates. It has caught countless malicious emails that other tools missed, from impersonations of senior financial personnel to ‘fearware’ that played on the workforce’s uncertainties at a time of pandemic.
Attackers are continuing to innovate, and automation has led to a new wave of email threats. 88% of security leaders now believe that cyber-attacks powered by offensive AI are inevitable. The email threat landscape is rapidly changing, and we can expect to receive more hoax emails that are more convincing. Now is a crucial moment for organizations to prepare for this eventuality by adopting AI in their email defenses.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Author
Dan Fein
VP, Product
Based in New York, Dan joined Darktrace’s technical team in 2015, helping customers quickly achieve a complete and granular understanding of Darktrace’s product suite. Dan has a particular focus on Darktrace/Email, ensuring that it is effectively deployed in complex digital environments, and works closely with the development, marketing, sales, and technical teams. Dan holds a Bachelor’s degree in Computer Science from New York University.
Navigating buying and adoption journeys for AI cybersecurity tools
Enterprise AI tools go mainstream
In this dawning Age of AI, CISOs are increasingly exploring investments in AI security tools to enhance their organizations’ capabilities. AI can help achieve productivity gains by saving time and resources, mining intelligence and insights from valuable data, and increasing knowledge sharing and collaboration.
While investing in AI can bring immense benefits to your organization, first-time buyers of AI cybersecurity solutions may not know where to start. They will have to determine the type of tool they want, know the options available, and evaluate vendors. Research and understanding are critical to ensure purchases are worth the investment.
Challenges of a muddied marketplace
Key challenges in AI purchasing come from consumer doubt and lack of vendor transparency. The AI software market is buzzing with hype and flashy promises, which are not necessarily going to be realized immediately. This has fostered uncertainty among potential buyers, especially in the AI cybersecurity space.
As Gartner writes, “There is a general lack of transparency and understanding about how AI-enhanced security solutions leverage AI and the effectiveness of those solutions within real-world SecOps. This leads to trust issues among security leaders and practitioners, resulting in slower adoption of AI features” [1].
Given this widespread uncertainty generated through vague hype, buyers must take extra care when considering new AI tools to adopt.
Goals of AI adoption
Buyers should always start their journeys with objectives in mind, and a universal goal is to achieve return on investment. When organizations adopt AI, there are key aspects that will signal strong payoff. These include:
Wide-ranging application across operations and areas of the business
Actual, enthusiastic adoption and application by the human security team
Integration with the rest of the security stack and existing workflows
Business and operational benefits, including but not limited to:
Reduced risk
Reduced time to response
Reduced potential downtime, damage, and disruption
Increased visibility and coverage
Improved SecOps workflows
Decreased burden on teams so they can take on more strategic tasks
Ideally, most or all of these measurements will be fulfilled. It is not enough for AI tools to benefit productivity and workflows in theory; they must be practically implemented to provide return on investment.
Investigation before investment
Before investing in AI tools, buyers should ask questions pertaining to each stage of the adoption journey. The answers to these questions will not only help buyers gauge if a tool could be worth the investment, but also plan how the new tool will practically fit into the organization’s existing technology and workflows.
These questions help you imagine how a tool will fit into your organization and determine whether a vendor is worth further evaluation. Once you decide a tool has potential use and feasibility in your organization, it is time to dive deeper and learn more.
Ask vendors specific questions about their technology. This information will most likely not be on their websites, and since it involves intellectual property, it may require an NDA.
Find a longer list of questions to ask vendors and what to look for in their responses in the white paper “CISO’s Guide to Buying AI.”
Committing to transparency amidst the AI hype
For security teams to make the most out of new AI tools, they must trust the AI. Especially in an AI marketplace full of hype and obfuscation, transparency should be baked into both the descriptions of the AI tool and the tool’s functionality itself. With that in mind, here are some specifics about what techniques make up Darktrace’s AI.
Darktrace as an AI cybersecurity vendor
Darktrace has been using AI technology in cybersecurity for over 10 years. As a pioneer in the space, we have made innovation part of our process.
The Darktrace ActiveAI Security Platform™ uses multi-layered AI that trains on your unique business operations data for tailored security across the enterprise. This approach ensures that the strengths of one AI technique make up for the shortcomings of another, providing well-rounded and reliable coverage. Our models are always on and always learning, allowing your team to stop attacks in real time.
The machine learning techniques used in our solution include:
Unsupervised machine learning
Multiple clustering techniques
Multiple anomaly detection models in tandem analyzing data across hundreds of metrics
Bayesian probabilistic methods
Bayesian metaclassifier for autonomous fine-tuning of unsupervised machine learning models
Deep learning engines
Graph theory
Applied supervised machine learning for investigative AI
Neural networks
Reinforcement learning
Generative and applied AI
Natural Language Processing (NLP) and Large Language Models (LLMs)
Post-processing models
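As a generic illustration of how a probabilistic metaclassifier might fuse the outputs of several detectors, here is a naive-Bayes-style sketch (a textbook construction under an independence assumption, not Darktrace’s implementation; all probabilities are invented):

```python
import math

def fuse(prior_malicious: float, likelihood_pairs: list) -> float:
    """Fuse independent detector outputs into a posterior P(malicious).

    likelihood_pairs: [(P(obs | malicious), P(obs | benign)), ...] per detector.
    Works in log-odds space so many weak signals combine stably.
    """
    log_odds = math.log(prior_malicious / (1 - prior_malicious))
    for p_malicious, p_benign in likelihood_pairs:
        log_odds += math.log(p_malicious / p_benign)
    return 1 / (1 + math.exp(-log_odds))
```

With a 1% prior, two detectors that each find the observation far more likely under ‘malicious’ lift the posterior above 25%, while a single benign-leaning detector pushes it below the prior; this is the sense in which one technique’s strength can offset another’s shortcoming.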
Additionally, since Darktrace focuses on using the customer’s data across its entire digital estate, it brings a range of advantages in data privacy, interpretability, and data transfer costs.
Building trust with Darktrace AI
Darktrace further supports the human security team’s adoption of our technology by building trust. To do that, we designed our platform to give your team visibility and control over the AI.
Instead of functioning as a black box, our products focus on interpretability and sharing confidence levels. This includes specifying the threshold that triggered a given alert and surfacing the details of the AI Analyst’s investigations so teams can see how it reached its conclusions. The interpretability of our AI uplevels and upskills the human security team with more information to drive investigations and remediation actions.
For complete control, the human security team can modify all the detection and response thresholds for our model alerts to customize them to fit specific business preferences.
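A sketch of what operator-tunable, self-explaining alerting can look like; the class, field, and parameter names here are hypothetical, not Darktrace’s API:

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    """Hypothetical per-model settings an operator can tune."""
    name: str
    detection_threshold: float   # score needed to raise an alert
    response_threshold: float    # higher score needed for autonomous response

def evaluate(config: ModelConfig, score: float) -> dict:
    """Return an alert record that explains why it fired, not just that it did."""
    return {
        "model": config.name,
        "score": score,
        "detection_threshold": config.detection_threshold,
        "alert": score >= config.detection_threshold,
        "autonomous_response": score >= config.response_threshold,
    }
```

Separating the detection and response thresholds lets a team alert broadly while reserving autonomous action for high-confidence cases, and echoing the threshold back in the record is what makes the decision auditable.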
Conclusion
CISOs are increasingly considering investing in AI cybersecurity tools, but in this rapidly growing field, it’s not always clear what to look for.
Buyers should first determine their goals for a new AI tool, then research possible vendors by reviewing validation and asking deeper questions. This will reveal if a tool is a good match for the organization to move forward with investment and adoption.
As a leader in the AI cybersecurity industry, Darktrace is always ready to help you on your AI journey.
Triaging Triada: Understanding an Advanced Mobile Trojan and How it Targets Communication and Banking Applications
The rise of Android malware
Recently, there has been a significant increase in malware strains targeting mobile devices, with a growing number of Android-based malware families, such as banking trojans, which aim to steal sensitive banking information from organizations and individuals worldwide.
These malware families attempt to access users’ accounts to steal online banking credentials and cookies, bypass multi-factor authentication (MFA), and conduct automatic transactions to steal funds [1]. They often masquerade as legitimate software or communications from social media platforms to compromise devices. Once installed, they use tactics such as keylogging, dumping cached credentials, and searching the file system for stored passwords to steal credentials, take over accounts, and potentially perform identity theft [1].
One recent example is the Antidot Trojan, which infects devices by disguising itself as an update page for Google Play. It establishes a command-and-control (C2) channel with a server, allowing malicious actors to execute commands and collect sensitive data [2].
Despite the ability of these malware families to evade detection by standard security software (for example, by changing their code [3]), Darktrace recently detected another Android malware family, Triada, communicating with a C2 server and exfiltrating data.
Triada: Background and tactics
First surfacing in 2016, Triada is a modular mobile trojan known to target banking and financial applications, as well as popular communication applications like WhatsApp, Facebook, and Google Mail [4]. It has been deployed as a backdoor on devices such as CTV boxes, smartphones, and tablets during the supply chain process [5]. Triada can also be delivered via drive-by downloads, phishing campaigns, smaller trojans like Leech, Ztorg, and Gopro, or more recently, as a malicious module in applications such as unofficial versions of WhatsApp, YoWhatsApp, and FM WhatsApp [6] [7].
How does Triada work?
Once downloaded onto a user’s device, Triada collects information about the system, such as the device’s model, OS version, SD card space, and list of installed applications, and sends this information to a C2 server. The server then responds with a configuration file containing the device’s personal identification number and settings, including the list of modules to be installed.
After a device has been successfully infected by Triada, malicious actors can monitor and intercept incoming and outgoing texts (including two-factor authentication messages), steal login credentials and credit card information from financial applications, divert in-application purchases to themselves, create fake messaging and email accounts, install additional malicious applications, infect devices with ransomware, and take control of the camera and microphone [4] [7].
For devices infected by unofficial versions of WhatsApp, which are downloaded from third-party app stores [9] and from mobile applications such as Snaptube and Vidmate, Triada collects unique device identifiers, information, and keys required for legitimate WhatsApp to work and sends them to a remote server to register the device [7] [12]. The server then responds by sending a link to the Triada payload, which is downloaded and launched. This payload also downloads additional malicious modules, signs into WhatsApp accounts on the target’s phone, and requests the same permissions as the legitimate WhatsApp application, such as access to SMS messages. If granted, a malicious actor can sign the user up for paid subscriptions without their knowledge. Triada then collects information about the user’s device and mobile operator and sends it to the C2 server [9] [12].
How does Triada avoid detection?
Triada evades detection by modifying the Zygote process, which serves as a template for every application in the Android OS. This enables the malware to become part of every application launched on a device [3]. It also substitutes system functions and conceals modules from the list of running processes and installed apps, ensuring that the system does not raise the alarm [3]. Additionally, as Triada connects to a C2 server on the first boot, infected devices remain compromised even after a factory reset [4].
Triada attack overview
Across multiple customer deployments, devices were observed making a large number of connections to a range of hostnames, primarily over encrypted SSL and HTTPS protocols. These hostnames had never previously been observed on the customers’ networks and appear to be algorithmically generated. Examples include “68u91.66foh90o[.]com”, “92n7au[.]uhabq9[.]com”, “9yrh7.mea5ms[.]com”, and “is5jg.3zweuj[.]com”.
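A rough sense of why hostnames like these look algorithmically generated can be captured with a simple character-statistics heuristic; the scoring formula and threshold below are illustrative inventions (real DGA detectors use n-gram language models, registration data, and many more features):

```python
import math
from collections import Counter

def label_score(label: str) -> float:
    """Shannon entropy plus a digit-mix bonus; letter-and-digit gibberish scores high."""
    counts = Counter(label)
    total = len(label)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    digit_fraction = sum(ch.isdigit() for ch in label) / total
    return entropy + 2 * digit_fraction

def looks_generated(hostname: str, threshold: float = 2.8) -> bool:
    """Crude heuristic: flag hostnames whose longest non-TLD label scores high."""
    labels = hostname.strip(".").split(".")
    candidate = max(labels[:-1] or labels, key=len)   # skip the TLD
    return label_score(candidate) >= threshold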
Most of the IP addresses associated with these hostnames belong to an ASN associated with the cloud provider Alibaba (i.e., AS45102 Alibaba US Technology Co., Ltd). These connections were made over a range of high-numbered ports above 1000, most commonly above 30000 (such as 32091), which Darktrace recognized as extremely unusual for the SSL and HTTPS protocols.
On several customer deployments, devices were seen exfiltrating data to hostnames which also appeared to be algorithmically generated. This occurred via HTTP POST requests containing unusual URI strings that were made without a prior GET request, indicating that the infected device was using a hardcoded list of C2 servers.
These connections correspond with reports that devices affected by Triada communicate with the C2 server to transmit their information and receive instructions for installing the payload.
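The two traffic signals described above (an HTTP POST to a host never fetched with a prior GET, and SSL/HTTPS over a port far from the usual 443) could be sketched as a simple pass over an ordered connection log; field names, the port set, and all sample data are illustrative:

```python
# Illustrative set of ports where TLS traffic is unremarkable.
USUAL_TLS_PORTS = {443, 8443}

def suspicious_events(connections: list) -> list:
    """connections: ordered dicts with 'host', 'method' (None for raw TLS),
    'port', and 'protocol'. Returns (signal, host) findings."""
    hosts_seen_via_get = set()
    findings = []
    for conn in connections:
        if conn.get("method") == "GET":
            hosts_seen_via_get.add(conn["host"])
        elif conn.get("method") == "POST" and conn["host"] not in hosts_seen_via_get:
            # POST with no preceding GET suggests a hardcoded C2 destination.
            findings.append(("post_without_get", conn["host"]))
        if conn.get("protocol") in ("ssl", "https") and conn["port"] not in USUAL_TLS_PORTS:
            findings.append(("unusual_port_for_protocol", conn["host"]))
    return findings
```

Note that the ordering matters: a GET seen after the POST would not retroactively clear the finding, mirroring how beaconing malware typically posts to its C2 list without ever browsing the site first.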
A number of these endpoints have communicating files associated with the unofficial WhatsApp versions YoWhatsApp and FM WhatsApp [11] [12] [13]. This could indicate that the devices connecting to these endpoints were infected via malicious modules in the unofficial versions of WhatsApp, as reported by open-source intelligence (OSINT) [10] [12]. It could also mean that the infected devices are using these connections to download additional files from the C2 server, which could infect systems with additional malicious modules related to Triada.
Moreover, on certain customer deployments, shortly before or after connecting to algorithmically generated hostnames with communicating files linked to YoWhatsApp and FM WhatsApp, devices were also seen connecting to multiple endpoints associated with WhatsApp and Facebook.
These surrounding connections indicate that Triada is attempting to sign in to the users’ WhatsApp accounts on their mobile devices to request permissions such as access to text messages. Additionally, Triada sends information about users’ devices and mobile operators to the C2 server.
The connections made to the algorithmically generated hostnames over SSL and HTTPS protocols, along with the HTTP POST requests, triggered multiple Darktrace models to alert. These models include those that detect connections to potentially algorithmically generated hostnames, connections over ports that are highly unusual for the protocol used, unusual connectivity over the SSL protocol, and HTTP POSTs to endpoints that Darktrace has determined to be rare for the network.
Conclusion
Recently, the use of Android-based malware families aimed at stealing banking and login credentials has become a popular trend among threat actors, who use this information to perform identity theft and steal funds from victims worldwide.
Across affected customers, multiple devices were observed connecting to a range of likely algorithmically generated hostnames over SSL and HTTPS protocols. These devices were also seen sending data out of the network to various hostnames via HTTP POST requests without first making a GET request. The URIs in these requests appeared to be algorithmically generated, suggesting the exfiltration of sensitive network data to multiple Triada C2 servers.
This activity highlights the sophisticated methods used by malware like Triada to evade detection and exfiltrate data. It underscores the importance of advanced security measures and anomaly-based detection systems to identify and mitigate such mobile threats, protecting sensitive information and maintaining network integrity.
Credit to: Justin Torres (Senior Cyber Security Analyst) and Charlotte Thompson (Cyber Security Analyst).
Appendices
Darktrace Model Detections
Model Alert Coverage
Anomalous Connection / Application Protocol on Uncommon Port
Anomalous Connection / Multiple Connections to New External TCP Port
Anomalous Connection / Multiple HTTP POSTS to Rare Hostname
Anomalous Connections / Multiple Failed Connections to Rare Endpoint
Anomalous Connection / Suspicious Expired SSL
Compromise / DGA Beacon
Compromise / Domain Fluxing
Compromise / Fast Beaconing to DGA
Compromise / Sustained SSL or HTTP Increase
Compromise / Unusual Connections to Rare Lets Encrypt
Unusual Activity / Unusual External Activity
AI Analyst Incident Coverage
Unusual Repeated Connections to Multiple Endpoints