
Staying ahead of cybersecurity threats in the age of AI


Over the past year, the speed, scale, and sophistication of cyberattacks have escalated alongside the rapid growth and adoption of AI. Defenders are just beginning to harness the potential of generative AI to shift the cybersecurity landscape in their favor and stay ahead of adversaries. It is equally important to recognize how AI could be exploited by malicious actors. In partnership with OpenAI, we are publishing research today on emerging threats in the AI era, focusing on activities linked to known threat actors, including prompt injections, attempted misuse of large language models (LLMs), and fraud. Our analysis shows that attackers are leveraging AI as a productivity tool within the offensive landscape. You can read OpenAI’s blog on the research here.


So far, Microsoft and OpenAI have not identified any novel or unique AI-enabled attack techniques from threat actors, but we continue to closely monitor the evolving landscape. The goal of Microsoft’s collaboration with OpenAI, including this research release, is to ensure the safe and responsible use of AI technologies, such as ChatGPT, adhering to the highest ethical standards to protect against misuse. As part of this effort, we have taken steps to disrupt assets and accounts linked to threat actors, enhance the protection of OpenAI’s LLM technology and its users, and reinforce the safety measures around our models. Additionally, we are committed to leveraging generative AI to counter threat actors and using new tools, like Microsoft Copilot for Security, to empower defenders globally.



A disciplined strategy for identifying and preventing threat actors


The advancement of technology drives the need for robust cybersecurity and safety protocols. For instance, the White House's Executive Order on AI mandates thorough safety testing and government oversight for AI systems that significantly impact national security, economic stability, or public health and safety. Our efforts to strengthen the protections of our AI models and collaborate with partners on their safe development, deployment, and use are aligned with the Executive Order’s call for comprehensive AI safety and security standards.


As part of Microsoft’s leadership in AI and cybersecurity, we are introducing principles that guide our policies and actions to mitigate risks associated with the use of our AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal organizations we monitor.


These principles include:


Detection and action against malicious use: Upon identifying malicious use of any Microsoft AI APIs, services, or systems by a known threat actor, such as nation-state APTs, APMs, or cybercrime syndicates, Microsoft will take appropriate measures, including disabling accounts, terminating services, or restricting resource access, to disrupt their activities (a simplified sketch of this enforcement flow appears after this list).

Notification to other AI service providers: If a threat actor is detected using another service provider's AI, APIs, or systems, Microsoft will promptly notify the provider and share relevant data, enabling them to verify our findings and take appropriate action based on their policies.


Collaboration with stakeholders: Microsoft will work with other stakeholders to share information regularly on the use of AI by threat actors. This collaboration fosters coordinated and effective responses to risks across the ecosystem.


Transparency: Microsoft will inform the public and relevant stakeholders about actions taken under these principles, including details on threat actors' use of AI in our systems and the corresponding measures taken, where appropriate.
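
To illustrate how principles like these can translate into operational tooling, the sketch below maps a confirmed abuse finding to the enforcement and notification steps described above. It is purely illustrative: every type, field, and action name is a hypothetical stand-in, not a representation of Microsoft's or OpenAI's actual enforcement systems.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Action(Enum):
    DISABLE_ACCOUNT = auto()     # principle 1: disrupt the actor's access
    TERMINATE_SERVICE = auto()
    RESTRICT_RESOURCES = auto()
    NOTIFY_PROVIDER = auto()     # principle 2: alert the affected AI provider

@dataclass
class AbuseFinding:
    actor: str                               # tracked threat actor name
    asset_id: str                            # account or resource involved (hypothetical ID)
    external_provider: Optional[str] = None  # set if another provider's AI was used

def enforcement_plan(finding: AbuseFinding) -> list[Action]:
    """Map a confirmed finding to the responses described in the principles above."""
    plan = [Action.DISABLE_ACCOUNT, Action.RESTRICT_RESOURCES]
    if finding.external_provider:
        plan.append(Action.NOTIFY_PROVIDER)
    return plan

# Hypothetical example finding.
print(enforcement_plan(AbuseFinding("Forest Blizzard", "acct-1234")))
```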


Microsoft remains dedicated to responsible AI innovation, prioritizing the safety, integrity, and ethical use of our technologies. These principles build on Microsoft's Responsible AI practices, our voluntary commitments to advancing responsible AI, and the Azure OpenAI Code of Conduct. They also support Microsoft's broader commitment to strengthening international law and norms, as well as advancing the goals of the Bletchley Declaration, endorsed by 29 countries.





The combined defenses of Microsoft and OpenAI safeguard AI platforms


Due to the security focus of the Microsoft and OpenAI partnership, the companies can take action when both known and emerging threat actors are identified. Microsoft Threat Intelligence monitors over 300 distinct threat actors, including 160 nation-state groups and 50 ransomware organizations, among others. These adversaries use various digital identities and attack infrastructures. Microsoft’s experts, alongside automated systems, continuously analyze and correlate these patterns, revealing attempts by attackers to evade detection or enhance their capabilities with new technologies. In alignment with efforts to thwart threat actors across our platforms and in collaboration with partners, Microsoft actively studies the use of AI and LLMs by adversaries, working with OpenAI to monitor attack activity and leveraging this knowledge to strengthen defenses.


This blog outlines activities observed from known threat actor infrastructures identified by Microsoft Threat Intelligence and shared with OpenAI to detect potential misuse or abuse of their platform, protecting our mutual customers from future threats.


Given the rapid rise of AI and the growing use of LLMs in cyber operations, we are continuing our work with MITRE to incorporate these LLM-related tactics, techniques, and procedures (TTPs) into the MITRE ATT&CK® framework and MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems) knowledge base. This strategic expansion demonstrates our commitment not only to tracking and neutralizing threats but also to leading the development of countermeasures in the evolving landscape of AI-driven cyber operations.
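
As a sketch of what that tracking could look like in defender tooling, the snippet below catalogs the LLM-themed TTP names used throughout this post, along with the actors observed using them. The record structure and the identifiers are illustrative assumptions, not official ATT&CK or ATLAS entries.

```python
from dataclasses import dataclass, field

@dataclass
class LlmTtp:
    ttp_id: str                 # placeholder ID, not an official ATT&CK/ATLAS ID
    name: str
    description: str
    observed_actors: list[str] = field(default_factory=list)

# The TTP names and actor mappings below come from the findings in this post.
CATALOG = [
    LlmTtp("LLM-TTP-001", "LLM-informed reconnaissance",
           "Querying LLMs about targets, technologies, or experts.",
           ["Forest Blizzard", "Emerald Sleet", "Charcoal Typhoon", "Salmon Typhoon"]),
    LlmTtp("LLM-TTP-002", "LLM-enhanced scripting techniques",
           "Using LLMs to write, refine, or troubleshoot scripts.",
           ["Forest Blizzard", "Emerald Sleet", "Crimson Sandstorm",
            "Charcoal Typhoon", "Salmon Typhoon"]),
    LlmTtp("LLM-TTP-003", "LLM-supported social engineering",
           "Drafting phishing or lure content with LLM assistance.",
           ["Emerald Sleet", "Crimson Sandstorm", "Charcoal Typhoon"]),
]

def actors_using(name: str) -> list[str]:
    """List every tracked actor observed using a given LLM-themed TTP."""
    return [a for t in CATALOG if t.name == name for a in t.observed_actors]

print(actors_using("LLM-supported social engineering"))
```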



Overview of Microsoft and OpenAI’s findings and threat intelligence insights


Over the past several years, the threat landscape has consistently shown that threat actors follow technological trends in parallel with their defender counterparts. Like defenders, threat actors are exploring AI, including large language models (LLMs), to increase their efficiency and leverage accessible platforms to advance their objectives and attack techniques. Cybercriminal groups, nation-state actors, and other adversaries are actively experimenting with emerging AI technologies to assess their potential value for operations and to identify security controls they may need to bypass. On the defense side, strengthening these same security controls and implementing advanced monitoring systems that anticipate and block malicious activity is crucial.


While threat actors’ motives and levels of sophistication may vary, their key tasks during attacks remain similar. These tasks include reconnaissance, such as gathering information about potential targets' industries, locations, and relationships; coding assistance, including improving scripts and developing malware; and using native languages more effectively. LLMs, with their built-in language support, appeal to threat actors focused on social engineering and other tactics involving deceptive communications tailored to their targets’ roles, professional networks, and relationships.


Notably, our research with OpenAI has not identified significant attacks involving the LLMs we closely monitor. However, we believe it is important to share this research to highlight early, incremental steps we have observed well-known threat actors taking, and to inform the defender community about how we are blocking and countering these moves.


While attackers will likely continue exploring AI and testing the limits of its capabilities and security controls, it is essential to view these risks in context. Basic cybersecurity practices such as multi-factor authentication (MFA) and Zero Trust defenses remain critical, as attackers may use AI tools to enhance traditional cyberattacks that rely on social engineering or exploiting unsecured devices and accounts.
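
To make the MFA point concrete, here is a minimal sketch of time-based one-time password (TOTP) verification, the mechanism behind many MFA prompts. It assumes the third-party pyotp library; any RFC 6238 implementation behaves the same way.

```python
import pyotp  # third-party: pip install pyotp

# Enrollment: generate and store a per-user secret (normally done server-side
# and shared with the user's authenticator app, e.g. via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Sign-in: the user submits the 6-digit code from their authenticator app.
submitted = totp.now()  # stand-in for user input in this sketch

# Verification: a phished password alone no longer grants access, because the
# attacker would also need this short-lived second factor.
print("accepted" if totp.verify(submitted) else "rejected")
```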


The threat actors profiled below represent a sample of the observed activities that we believe best reflect the tactics, techniques, and procedures (TTPs) the industry will need to track more closely using updates to the MITRE ATT&CK® framework or the MITRE ATLAS™ knowledge base.



Forest Blizzard 


Forest Blizzard (STRONTIUM) is a Russian military intelligence group tied to GRU Unit 26165, known for targeting entities of both tactical and strategic value to the Russian government. Their activities span various sectors, including defense, transportation/logistics, government, energy, non-governmental organizations (NGOs), and information technology. Throughout the conflict in Ukraine, Forest Blizzard has been highly active in targeting organizations involved in or connected to the war, with Microsoft assessing that their operations play a critical role in advancing Russia’s foreign policy and military objectives, both in Ukraine and globally. Forest Blizzard is also known as APT28 and Fancy Bear by other researchers.


Forest Blizzard's use of large language models (LLMs) has focused on researching satellite and radar technologies relevant to conventional military operations in Ukraine, as well as conducting general research to support their cyber activities. Based on these observations, we classify their tactics, techniques, and procedures (TTPs) as follows:


LLM-informed reconnaissance: Engaging with LLMs to gather information on satellite communication protocols, radar imaging technologies, and specific technical parameters, indicating an effort to gain detailed knowledge of satellite capabilities.

LLM-enhanced scripting techniques: Leveraging LLMs for basic scripting tasks such as file manipulation, data selection, regular expressions, and multiprocessing, likely aimed at automating or optimizing their technical operations (a benign sketch of this task category follows below).
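
For readers unfamiliar with this task category, the benign sketch below shows the kind of generic script it covers, combining file manipulation, regular expressions, and multiprocessing. The directory, file layout, and search pattern are hypothetical; nothing here reflects actual Forest Blizzard code.

```python
import re
from multiprocessing import Pool
from pathlib import Path

# Scan a directory of text files in parallel and extract lines matching
# a pattern. The pattern and the "logs" directory are hypothetical.
PATTERN = re.compile(r"ERROR\s+(\d{3})")

def scan_file(path: Path) -> list[str]:
    matches = []
    for line in path.read_text(errors="ignore").splitlines():
        if PATTERN.search(line):
            matches.append(f"{path.name}: {line.strip()}")
    return matches

if __name__ == "__main__":
    files = list(Path("logs").glob("*.txt"))
    with Pool(processes=4) as pool:
        for result in pool.map(scan_file, files):
            if result:
                print("\n".join(result))
```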


Microsoft has observed Forest Blizzard experimenting with the use cases of a new technology. All accounts and assets associated with Forest Blizzard have been disabled.





Emerald Sleet


Emerald Sleet (THALLIUM) is a North Korean threat actor that has remained highly active throughout 2023. Recent operations involved spear-phishing emails aimed at compromising and gathering intelligence from prominent individuals with expertise on North Korea. Microsoft observed Emerald Sleet impersonating well-known academic institutions and NGOs to deceive victims into sharing expert insights and commentary on foreign policies related to North Korea. This group is also known as Kimsuky and Velvet Chollima by other researchers.


Emerald Sleet’s use of large language models (LLMs) has supported these operations through research into think tanks and North Korea experts, as well as generating content likely intended for spear-phishing campaigns. They have also leveraged LLMs to understand publicly known vulnerabilities, troubleshoot technical issues, and seek assistance with various web technologies. Based on these activities, we categorize their tactics, techniques, and procedures (TTPs) as follows:


LLM-assisted vulnerability research: Using LLMs to gain a deeper understanding of publicly reported vulnerabilities, such as the CVE-2022-30190 Microsoft Support Diagnostic Tool (MSDT) vulnerability, also known as “Follina.”

LLM-enhanced scripting techniques: Employing LLMs for basic scripting tasks, such as programmatically identifying specific user events on systems, and for troubleshooting and understanding different web technologies.


LLM-supported social engineering: Utilizing LLMs to help draft and generate content likely intended for spear-phishing campaigns targeting individuals with regional expertise (a defender-side detection sketch follows this list).


LLM-informed reconnaissance: Engaging with LLMs to identify think tanks, government organizations, or experts focused on defense issues or North Korea’s nuclear weapons program.
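
On the defender side, the kind of impersonation described above is often caught with simple lookalike-domain heuristics. The sketch below flags sender domains that are nearly, but not exactly, a trusted domain; the allow-list and threshold are hypothetical, and production systems use many more signals.

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of domains the organization trusts.
TRUSTED_DOMAINS = ["example.edu", "contoso.org"]

def closest_trusted(sender_domain: str) -> tuple[str, float]:
    """Return the most similar trusted domain and its similarity ratio."""
    best = max(TRUSTED_DOMAINS,
               key=lambda d: SequenceMatcher(None, sender_domain, d).ratio())
    return best, SequenceMatcher(None, sender_domain, best).ratio()

def is_suspicious(sender_domain: str, threshold: float = 0.85) -> bool:
    # Near-identical but not exact: a classic impersonation signature.
    domain, score = closest_trusted(sender_domain)
    return sender_domain != domain and score >= threshold

print(is_suspicious("examp1e.edu"))    # True: one-character substitution
print(is_suspicious("unrelated.net"))  # False: nothing close
```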


All accounts and assets associated with Emerald Sleet have been disabled.



Crimson Sandstorm


Crimson Sandstorm (CURIUM) is an Iranian threat actor believed to be linked to the Islamic Revolutionary Guard Corps (IRGC). Active since at least 2017, the group has targeted a variety of sectors, including defense, maritime shipping, transportation, healthcare, and technology. Their attacks often utilize watering hole tactics and social engineering to deploy custom .NET malware. Previous research has also identified their use of email-based command-and-control (C2) channels. Crimson Sandstorm is also tracked by other researchers under names like Tortoiseshell, Imperial Kitten, and Yellow Liderc.


Crimson Sandstorm’s use of large language models (LLMs) aligns with broader patterns observed in their operations. Their interactions have included seeking help with social engineering, troubleshooting errors, .NET development, and finding ways to evade detection on compromised systems. These TTPs are categorized as follows:


LLM-supported social engineering: Using LLMs to craft phishing emails, including one posing as an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism.


LLM-enhanced scripting techniques: Employing LLMs to generate code for app and web development, interacting with remote servers, web scraping, executing tasks upon user sign-in, and sending system data via email.


LLM-enhanced anomaly detection evasion: Seeking LLM assistance in developing code to evade detection, disable antivirus via the registry or Windows policies, and delete files after an application is closed (a defender-side audit sketch follows this list).
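
From the defensive perspective, the registry-based tampering described in the last item is straightforward to audit. The minimal, Windows-only sketch below checks the widely documented Defender policy value that such tampering has historically targeted; treat it as an illustration rather than a complete tamper-detection control.

```python
import winreg

# Check whether Microsoft Defender has been disabled through the Windows
# policy registry key that tampering attempts commonly target.
POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows Defender"

def defender_disabled_by_policy() -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "DisableAntiSpyware")
            return value == 1
    except FileNotFoundError:
        # Key or value absent: no policy-based tampering detected.
        return False

if __name__ == "__main__":
    if defender_disabled_by_policy():
        print("ALERT: Defender disabled via policy registry key.")
    else:
        print("No policy-based Defender tampering found.")
```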


All accounts and assets associated with Crimson Sandstorm have been disabled.



Charcoal Typhoon


Charcoal Typhoon (CHROMIUM) is a Chinese state-affiliated threat actor with a wide operational focus. They target various sectors, including government, higher education, communications infrastructure, oil & gas, and information technology. Their activities have primarily affected entities in Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal, and they also show interest in institutions and individuals worldwide who oppose Chinese policies. Charcoal Typhoon is also known under other names such as Aquatic Panda, ControlX, RedHotel, and BRONZE UNIVERSITY.


Recently, Charcoal Typhoon has been observed using large language models (LLMs) in a manner that indicates an initial exploration of how LLMs might enhance their technical operations. Their interactions with LLMs have involved using them for tooling development, scripting, understanding cybersecurity tools, and generating content for social engineering. We categorize these tactics, techniques, and procedures (TTPs) as follows:


LLM-informed reconnaissance: Using LLMs to research and comprehend specific technologies, platforms, and vulnerabilities, indicative of early-stage information gathering.


LLM-enhanced scripting techniques: Employing LLMs to create and refine scripts, potentially to automate and streamline complex cyber tasks and operations.


LLM-supported social engineering: Leveraging LLMs for translations and communication, likely to aid in establishing connections or manipulating targets.


LLM-refined operational command techniques: Utilizing LLMs for advanced commands and deeper system access, reflecting post-compromise activities (a defender-side detection sketch follows this list).
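
Post-compromise command activity of this kind typically surfaces in process-creation telemetry. As a defender-side illustration, the sketch below flags encoded or hidden-window PowerShell invocations, a common hands-on-keyboard signature; the sample log lines and the regular expression are illustrative assumptions, not a Microsoft detection rule.

```python
import re

# Heuristic: flag command lines using encoding or window-hiding flags that
# frequently appear in hands-on-keyboard activity. Log lines are hypothetical.
SUSPICIOUS = re.compile(
    r"(?i)powershell(\.exe)?\s+.*(-enc(odedcommand)?|-nop|-w(indowstyle)?\s+hidden)"
)

def flag_command_lines(lines: list[str]) -> list[str]:
    return [line for line in lines if SUSPICIOUS.search(line)]

sample = [
    "powershell.exe -NoP -W Hidden -Enc SQBFAFgA...",  # truncated, hypothetical
    "notepad.exe C:\\Users\\admin\\notes.txt",
]
print(flag_command_lines(sample))  # flags only the first line
```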


All accounts and assets associated with Charcoal Typhoon have been disabled, underscoring our commitment to protecting against the misuse of AI technologies.



Salmon Typhoon


Salmon Typhoon (SODIUM) is an advanced Chinese state-affiliated threat actor known for targeting US defense contractors, government agencies, and organizations within the cryptographic technology sector. This actor has demonstrated its capabilities through malware deployments, such as Win32/Wkysol, to maintain remote access to compromised systems. With over a decade of operations characterized by periods of activity and dormancy, Salmon Typhoon has recently exhibited renewed engagement. This threat actor is also recognized as APT4 and Maverick Panda by other researchers.


In 2023, Salmon Typhoon’s interactions with large language models (LLMs) appeared exploratory, suggesting an evaluation of LLMs for gathering information on sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs. This tentative engagement may indicate an expansion of their intelligence-gathering methods and experimentation with new technologies.


Based on these observations, we categorize their tactics, techniques, and procedures (TTPs) as follows:


LLM-informed reconnaissance: Using LLMs to query a wide range of subjects, including global intelligence agencies, domestic issues, notable individuals, cybersecurity topics, strategic interests, and various threat actors. This resembles the use of a search engine for public domain research.


LLM-enhanced scripting techniques: Leveraging LLMs to identify and resolve coding errors. Microsoft observed requests for support in developing potentially malicious code, but the model adhered to ethical guidelines and refused to assist with such requests.


LLM-refined operational command techniques: Showing interest in specific file types and concealment tactics within operating systems, indicating efforts to refine command execution.


LLM-aided technical translation and explanation: Utilizing LLMs for translating computing terms and technical documents.


Salmon Typhoon’s engagement with LLMs reflects traditional behaviors adapted to new technological contexts. In response, all accounts and assets associated with Salmon Typhoon have been disabled.


As AI technologies continue to advance, threat actors will keep exploring them. Microsoft will persist in monitoring threat actors and malicious activities involving LLMs, collaborating with OpenAI and other partners to share intelligence, enhance customer protections, and support the broader security community.


