As one of many defining applied sciences of this century, synthetic intelligence (AI) appears to witness each day developments with new entrants to the sphere, technological breakthroughs, and artistic and progressive purposes. The panorama for AI safety shares the identical breakneck tempo with streams of newly proposed laws, novel vulnerability discoveries, and rising menace vectors.
While the pace of change is thrilling, it creates sensible limitations for enterprise AI adoption. As our Cisco 2024 AI Readiness Index factors out, issues about AI safety are regularly cited by enterprise leaders as a major roadblock to embracing the total potential of AI of their organizations.
That’s why we’re excited to introduce our inaugural State of AI Security report. It supplies a succinct, easy overview of among the most essential developments in AI safety from the previous yr, together with tendencies and predictions for the yr forward. The report additionally shares clear suggestions for organizations trying to enhance their very own AI safety methods, and highlights among the methods Cisco is investing in a safer future for AI.
Here’s an summary of what you’ll discover in our first State of AI Security report:
Evolution of the AI Threat Landscape
The speedy proliferation of AI and AI-enabled applied sciences has launched an enormous new assault floor that safety leaders are solely starting to take care of.
Risk exists at just about each step throughout your entire AI improvement lifecycle; AI belongings will be immediately compromised by an adversary or discreetly compromised although a vulnerability within the AI provide chain. The State of AI Security report examines a number of AI-specific assault vectors together with immediate injection assaults, information poisoning, and information extraction assaults. It additionally displays on the usage of AI by adversaries to enhance cyber operations like social engineering, supported by analysis from Cisco Talos.
Looking on the yr forward, cutting-edge developments in AI will undoubtedly introduce new dangers for safety leaders to pay attention to. For instance, the rise of agentic AI which might act autonomously with out fixed human supervision appears ripe for exploitation. On the opposite hand, the scale of social engineering threatens to develop tremendously, exacerbated by highly effective multimodal AI instruments within the mistaken palms.
Key Developments in AI Policy
The previous yr has seen important developments in AI coverage, each domestically and internationally.
In the United States, a fragmented state-by-state strategy has emerged within the absence of federal laws with over 700 AI-related payments launched in 2024 alone. Meanwhile, worldwide efforts have led to key developments, such because the UK and Canada’s collaboration on AI security and the European Union’s AI Act, which got here into drive in August 2024 to set a precedent for international AI governance.
Early actions in 2025 counsel larger focus in the direction of successfully balancing the necessity for AI safety with accelerating the pace of innovation. Recent examples embody President Trump’s govt order and rising assist for a pro-innovation surroundings, which aligns nicely with themes from the AI Action Summit held in Paris in February and the U.Ok.’s latest AI Opportunities Action Plan.
Original AI Security Research
The Cisco AI safety analysis group has led and contributed to a number of items of groundbreaking analysis that are highlighted within the State of AI Security report.
Research into algorithmic jailbreaking of enormous language fashions (LLMs) demonstrates how adversaries can bypass mannequin protections with zero human supervision. This method can be utilized to exfiltrate delicate information and disrupt AI companies. More not too long ago, the group explored automated jailbreaking of superior reasoning fashions like DeepSeek R1, to exhibit that even reasoning fashions can nonetheless fall sufferer to conventional jailbreaking methods.
The group additionally explores the security and safety dangers of fine-tuning fashions. While fine-tuning is a well-liked methodology for enhancing the contextual relevance of AI, many are unaware of the inadvertent penalties like mannequin misalignment.
Finally, the report evaluations two items of authentic analysis into poisoning public datasets and extracting coaching information from LLMs. These research make clear how simply—and cost-effectively—a nasty actor can tamper with or exfiltrate information from enterprise AI purposes.
Recommendations for AI Security
Securing AI techniques requires a proactive and complete strategy.
The State of AI Security report outlines a number of actionable suggestions, together with managing safety dangers all through the AI lifecycle, implementing robust entry controls, and adopting AI safety requirements such because the NIST AI Risk Management Framework and MITRE ATLAS matrix. We additionally have a look at how Cisco AI Defense will help companies adhere to those finest practices and mitigate AI threat from improvement to deployment.
Read the State of AI Security 2025
Ready to learn the total report? You can discover it right here.
We’d love to listen to what you assume. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!
Cisco Security Social Channels
Instagram
Facebook
Twitter
LinkedIn
Share: