The stakes of something going wrong with AI are extremely high. Only 29% of organizations feel fully equipped to detect and prevent unauthorized tampering with AI[1]. With AI, emerging risks target different stages of the AI lifecycle, while responsibility lies with different owners, including developers, end users, and vendors.
As AI becomes ubiquitous, enterprises will use and develop hundreds if not thousands of AI applications. Developers need AI security and safety guardrails that work for every application. In parallel, deployers and end users are rushing to adopt AI to improve productivity, potentially exposing their organization to data leakage or the poisoning of proprietary data. This adds to the growing risks that arise as organizations move beyond public data to train models on their proprietary data.
So, how can we ensure the security of AI systems? How can we protect AI from unauthorized access and misuse, or prevent data from leaking? Ensuring the secure and ethical use of AI systems has become a critical priority. The European Union has taken significant steps in this direction with the introduction of the EU AI Act.
This blog explores how the AI Act addresses security for AI systems and models, the importance of AI literacy among employees, and Cisco's approach to safeguarding AI through a holistic AI Defense vision.
The EU AI Act: A Framework for Secure AI
The EU AI Act represents a landmark effort by the EU to create a structured approach to AI governance. One of its components is its emphasis on cybersecurity requirements for high-risk AI systems. This includes mandating robust security protocols to prevent unauthorized access and misuse, ensuring that AI systems operate safely and predictably.
The Act promotes human oversight, recognizing that while AI can drive efficiencies, human judgment remains indispensable in preventing and mitigating risks. It also acknowledges the important role of all employees in ensuring security, requiring both providers and deployers to take measures to ensure a sufficient level of AI literacy among their staff.
Identifying and clarifying roles and responsibilities in securing AI systems is complex. The AI Act's main focus is on the developers of AI systems and certain general-purpose AI model providers, although it rightly acknowledges the shared responsibility between developers and deployers, underscoring the complex nature of the AI value chain.
Cisco’s Vision for Securing AI
In response to the growing need for AI security, Cisco has envisioned a comprehensive approach to protecting the development, deployment, and use of AI applications. This vision builds on five key aspects of AI security, from securing access to AI applications, to detecting risks such as data leakage and sophisticated adversarial threats, all the way to training employees.
“When embracing AI, organizations should not have to choose between speed and safety. In a dynamic landscape where competition is fierce, effectively securing technology throughout the lifecycle and without tradeoffs is how Cisco reimagines security for the age of AI.”
- Automated Vulnerability Assessment: By using AI-driven techniques, organizations can automatically and continuously assess AI models and applications for vulnerabilities. This helps identify hundreds of potential safety and security risks, empowering security teams to proactively address them.
- Runtime Security: Implementing protections during the operation of AI systems helps defend against evolving threats like denial of service and sensitive data leakage, and ensures these systems run safely.
- User Protections and Data Loss Prevention: Organizations need tools that prevent data loss and monitor unsafe behaviors. Companies need to ensure AI applications are used in compliance with internal policies and regulatory requirements.
- Managing Shadow AI: It’s critical to monitor and control unauthorized AI applications, known as shadow AI. Identifying third-party apps used by employees helps companies enforce policies to restrict access to unauthorized tools, protecting confidential information and ensuring compliance.
- Citizen and employee training: Alongside the right technological solutions, AI literacy among employees is crucial for the safe and effective use of AI. Increasing AI literacy helps build a workforce capable of responsibly managing AI tools, understanding their limitations, and recognizing potential risks. This, in turn, helps organizations comply with regulatory requirements and fosters a culture of AI security and ethical awareness.
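To make the shadow AI idea above concrete: one common starting point is simply comparing outbound traffic against an allowlist of approved AI services. The sketch below is a minimal, hypothetical illustration; the domain lists and the log format (`"<user> <destination-domain>"`) are assumptions for the example, not part of any real product.

```python
# Hypothetical sketch: flag "shadow AI" usage by checking proxy-log
# destinations against an allowlist of approved AI services.
# Domain names and log format are illustrative assumptions.

APPROVED_AI_DOMAINS = {"approved-ai.example.com"}

# AI services the organization knows about, approved or not.
KNOWN_AI_DOMAINS = {
    "approved-ai.example.com",
    "chat.example-llm.com",
    "api.example-genai.io",
}

def find_shadow_ai(proxy_log_lines):
    """Return AI-service domains seen in the logs that are not approved."""
    flagged = set()
    for line in proxy_log_lines:
        # Each assumed log line: "<user> <destination-domain>"
        _, domain = line.split()
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            flagged.add(domain)
    return flagged
```

In practice this kind of check would feed a policy engine that blocks the tool or notifies the security team, rather than just reporting domains.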
“The EU AI Act underscores the importance of equipping employees with more than just technical knowledge. It’s about implementing a holistic approach to AI literacy that also covers security and ethical considerations. This helps ensure that users are better prepared to safely handle AI and to harness the potential of this revolutionary technology.”
This vision is embedded in Cisco’s new technology solution, AI Defense. In the multifaceted quest to secure AI technologies, regulations like the EU AI Act, alongside training for citizens and employees, and innovations like Cisco’s AI Defense all play an important role.
As AI continues to transform every industry, these efforts are essential to ensuring that AI is used safely, ethically, and responsibly, ultimately safeguarding both organizations and users in the digital age.
[1] Cisco’s 2024 AI Readiness Index