The cybersecurity landscape is undergoing a profound transformation, with artificial intelligence (AI) at its core, presenting both challenges and opportunities. While AI can empower organizations to combat cyberattacks at machine speed and revolutionize threat detection, hunting, and incident response, it can also be exploited by adversaries. This makes it more critical than ever to design, deploy, and use AI securely.
At Microsoft, we are exploring AI’s potential to strengthen our security measures, unlock advanced protections, and improve software development. AI offers the ability to adapt to evolving threats, detect anomalies in real time, respond swiftly to neutralize risks, and tailor defenses to meet an organization’s specific needs. Additionally, AI can help address one of the industry's most pressing issues: the global cybersecurity workforce shortage. With approximately 4 million cybersecurity professionals needed worldwide, AI has the potential to close the talent gap and boost defenders' productivity. For instance, a recent study showed that Microsoft Copilot for Security increased security analysts’ accuracy by 44 percent and speed by 26 percent, regardless of their expertise level.
As we look to secure the future, it’s vital to balance leveraging AI’s benefits with preparing for AI-driven threats. AI holds the power to enhance human potential and address some of our greatest challenges. Achieving a more secure future will require fundamental advances in software engineering and a deep understanding of AI-driven threats as integral parts of any security strategy. Equally important is building strong partnerships between the public and private sectors to combat malicious actors effectively.
Security Snapshot
Traditional security tools are no longer sufficient to keep up with the evolving threats posed by cybercriminals. The growing speed, scale, and sophistication of cyberattacks require a more advanced approach to cybersecurity. Compounding this challenge is the global shortage of cybersecurity professionals, making it increasingly urgent to address the skills gap as cyberthreats become more frequent and severe.
AI offers a game-changing advantage for defenders. A recent study on Microsoft Copilot for Security, currently in customer preview, demonstrated that AI can significantly improve the speed and accuracy of security analysts, regardless of their experience level. Tasks such as identifying malicious scripts, creating incident reports, and determining appropriate remediation steps were completed more efficiently, showcasing the potential of AI to enhance cybersecurity defenses.
Attackers are actively exploring a wide range of technologies
The cyberthreat landscape has become increasingly complex, with attackers growing more motivated, sophisticated, and well-resourced. Both threat actors and defenders are turning to AI, including large language models (LLMs), to boost their productivity and to exploit accessible platforms that suit their objectives and attack techniques. In response to this rapidly evolving environment, Microsoft is introducing principles to guide actions that mitigate the risk of AI misuse by threat actors, such as advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates. These principles focus on identifying and taking action against malicious use of AI, notifying other AI service providers, fostering collaboration with stakeholders, and maintaining transparency.
Despite varying motives and levels of sophistication, threat actors share common tasks when conducting attacks. These include reconnaissance, such as researching potential victims’ industries, locations, and relationships; coding, including improving software scripts and developing malware; and seeking assistance with learning and using both human and machine languages.
Threat Briefing
Nation-states are increasingly attempting to harness AI technologies for their cyberoperations. In partnership with OpenAI, Microsoft is sharing threat intelligence that highlights state-affiliated adversaries—such as Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon—leveraging large language models (LLMs) to enhance their activities. The objective of this research collaboration is to ensure the safe, responsible use of AI, maintaining the highest ethical standards to prevent misuse.
Forest Blizzard (STRONTIUM), a Russian military intelligence group linked to GRU Unit 26165, has targeted sectors of strategic interest to Russia, including defense, energy, and IT. Meanwhile, Emerald Sleet (Velvet Chollima), a North Korean threat actor, has been impersonating academic institutions to gather intelligence on foreign policies related to North Korea, with LLMs supporting their spear-phishing campaigns.
Crimson Sandstorm (CURIUM), an Iranian actor connected to the Islamic Revolutionary Guard Corps (IRGC), has used LLMs for social engineering, coding support, and detection evasion. Charcoal Typhoon (CHROMIUM), a Chinese-affiliated actor, has focused on sectors in Taiwan and other regions, using LLMs for research and reconnaissance. Salmon Typhoon, another China-backed group, has explored LLMs for intelligence gathering on sensitive topics like geopolitics and US affairs.
Although significant AI-driven attacks have not yet been detected, Microsoft has taken proactive steps to disable threat actors' assets and accounts, reinforcing safeguards around AI platforms. AI-powered fraud is another emerging concern, with technologies such as voice synthesis posing new challenges for identity verification and fraud prevention.
Microsoft remains committed to ensuring responsible AI use with human oversight, focusing on security and privacy while countering these evolving threats.
Protecting against attacks
Microsoft detects an enormous amount of malicious activity—over 65 trillion cybersecurity signals daily. AI enhances our capacity to analyze this data, ensuring that the most critical insights are surfaced to help stop threats. This intelligence fuels generative AI for advanced threat protection, data security, and identity security, enabling defenders to catch threats that might otherwise go unnoticed.
To safeguard itself and its customers, Microsoft employs various methods, including AI-powered threat detection to monitor changes in network traffic and resource usage, behavioral analytics to spot risky sign-ins and anomalies, and machine learning models to identify malware and risky access attempts. Zero Trust security ensures every access request is fully authenticated, authorized, and encrypted, with device health verification required before any connection to the corporate network.
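Behavioral analytics of the kind described above often starts from a per-user baseline and flags deviations from it. The following is a minimal illustrative sketch, not Microsoft's actual implementation: it uses a simple z-score over historical daily sign-in counts to flag an unusual spike. The threshold and data are hypothetical.

```python
# Illustrative sketch of baseline-based anomaly detection (not a real
# production system): flag activity that deviates sharply from a
# per-user historical baseline using a z-score.
from statistics import mean, stdev

def is_anomalous(baseline: list[float], observed: float, threshold: float = 3.0) -> bool:
    """Return True if `observed` lies more than `threshold` standard
    deviations from the mean of the historical baseline."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        # No historical variation: any change at all is suspicious.
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# A user who normally signs in a handful of times per day suddenly
# produces hundreds of sign-in events in one day:
history = [4, 6, 5, 7, 5, 6, 4, 5]   # hypothetical daily sign-in counts
print(is_anomalous(history, 250))    # unusual spike -> True
print(is_anomalous(history, 6))      # within normal range -> False
```

Production systems combine many such signals (location, device, resource usage) and use learned models rather than a single fixed threshold, but the core idea of comparing observed behavior against an established baseline is the same.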
As threat actors adapt to Microsoft's stringent use of multifactor authentication (MFA) and passwordless security, we have seen an increase in social engineering attacks targeting our employees, as well as attempts to abuse high-value offerings such as free trials and promotional pricing. In these cases, attackers try to scale and automate their efforts while evading detection.
Microsoft combats these threats by developing AI models that detect fake accounts—whether posing as students, organizations, or companies—that have falsified data or concealed their true identities to evade sanctions, circumvent controls, or hide criminal histories such as corruption or theft. Tools such as GitHub Copilot, Microsoft Copilot for Security, and other integrated chat features further enhance our internal security infrastructure, helping prevent incidents that could disrupt operations.
A key vulnerability that threat actors exploit is human error. To combat email threats, Microsoft is enhancing its ability to detect malicious content by analyzing signals beyond an email's text. This matters because, with threat actors increasingly using AI, phishing emails have become harder to identify: they now often lack the awkward language and grammatical errors that once served as warning signs. History shows, however, that public awareness campaigns can effectively change behavior.
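Two examples of the kind of non-text signal mentioned above are a mismatch between a sender's display name and the actual sending domain, and links whose visible text points to a different domain than their real target. The sketch below is an illustrative assumption about such checks, not Microsoft's detection pipeline; the brand name and addresses are hypothetical.

```python
# Illustrative sketch of non-text phishing signals (not a production
# detector): sender-identity mismatch and link-target mismatch.
from urllib.parse import urlparse

def sender_domain_mismatch(display_name: str, from_address: str) -> bool:
    """True if the display name claims a well-known brand (here, just
    'Microsoft' as an example) while the real sending domain disagrees."""
    claimed = display_name.lower()
    actual_domain = from_address.rsplit("@", 1)[-1].lower()
    return "microsoft" in claimed and not actual_domain.endswith("microsoft.com")

def link_target_mismatch(link_text: str, href: str) -> bool:
    """True if the visible link text shows one domain but the href
    actually points somewhere else."""
    shown = urlparse(link_text if "://" in link_text else "https://" + link_text).hostname or ""
    real = urlparse(href).hostname or ""
    return shown != "" and real != "" and not real.endswith(shown)

print(sender_domain_mismatch("Microsoft Support", "alerts@secure-login.example"))  # True
print(link_target_mismatch("microsoft.com", "https://evil.example/login"))         # True
```

Because these signals come from headers and URLs rather than prose, they remain useful even when AI-generated phishing text is fluent and error-free.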
Ongoing employee education and public awareness campaigns are essential to counter social engineering, which remains a primary tactic used in cyberattacks.