A profitable AI transformation begins with a powerful safety basis. With a speedy improve in AI improvement and adoption, organizations want visibility into their rising AI apps and instruments. Microsoft Security offers menace safety, posture administration, information safety, compliance, and governance to safe AI purposes that you simply construct and use. These capabilities can be used to assist enterprises safe and govern AI apps constructed with the DeepSeeokay R1 mannequin and acquire visibility and management over using the seperate DeepSeeokay shopper app.
Secure and govern AI apps constructed with the DeepSeeokay R1 mannequin on Azure AI Foundry and GitHub
Develop with reliable AI
Last week, we introduced DeepSeeokay R1’s availability on Azure AI Foundry and GitHub, becoming a member of a various portfolio of greater than 1,800 fashions.
Customers at this time are constructing production-ready AI purposes with Azure AI Foundry, whereas accounting for his or her various safety, security, and privateness necessities. Similar to different fashions supplied in Azure AI Foundry, DeepSeeokay R1 has undergone rigorous purple teaming and security evaluations, together with automated assessments of mannequin conduct and in depth safety critiques to mitigate potential dangers. Microsoft’s internet hosting safeguards for AI fashions are designed to maintain buyer information inside Azure’s safe boundaries.
With Azure AI Content Safety, built-in content material filtering is obtainable by default to assist detect and block malicious, dangerous, or ungrounded content material, with opt-out choices for flexibility. Additionally, the security analysis system permits prospects to effectively take a look at their purposes earlier than deployment. These safeguards assist Azure AI Foundry present a safe, compliant, and accountable setting for enterprises to confidently construct and deploy AI options. See Azure AI Foundry and GitHub for extra particulars.
Start with Security Posture Management
AI workloads introduce new cyberattack surfaces and vulnerabilities, particularly when builders leverage open-source sources. Therefore, it’s vital to begin with safety posture administration, to find all AI inventories, akin to fashions, orchestrators, grounding information sources, and the direct and oblique dangers round these parts. When builders construct AI workloads with DeepSeeokay R1 or different AI fashions, Microsoft Defender for Cloud’s AI safety posture administration capabilities may also help safety groups acquire visibility into AI workloads, uncover AI cyberattack surfaces and vulnerabilities, detect cyberattack paths that may be exploited by unhealthy actors, and get suggestions to proactively strengthen their safety posture towards cyberthreats.

By mapping out AI workloads and synthesizing safety insights akin to identification dangers, delicate information, and web publicity, Defender for Cloud constantly surfaces contextualized safety points and suggests risk-based safety suggestions tailor-made to prioritize vital gaps throughout your AI workloads. Relevant safety suggestions additionally seem inside the Azure AI useful resource itself within the Azure portal. This offers builders or workload homeowners with direct entry to suggestions and helps them remediate cyberthreats sooner.
Safeguard DeepSeeokay R1 AI workloads with cyberthreat safety
While having a powerful safety posture reduces the chance of cyberattacks, the advanced and dynamic nature of AI requires lively monitoring in runtime as nicely. No AI mannequin is exempt from malicious exercise and could be susceptible to immediate injection cyberattacks and different cyberthreats. Monitoring the newest fashions is vital to making sure your AI purposes are protected.
Integrated with Azure AI Foundry, Defender for Cloud constantly screens your DeepSeeokay AI purposes for uncommon and dangerous exercise, correlates findings, and enriches safety alerts with supporting proof. This offers your safety operations heart (SOC) analysts with alerts on lively cyberthreats akin to jailbreak cyberattacks, credential theft, and delicate information leaks. For instance, when a immediate injection cyberattack happens, Azure AI Content Safety immediate shields can block it in real-time. The alert is then despatched to Microsoft Defender for Cloud, the place the incident is enriched with Microsoft Threat Intelligence, serving to SOC analysts perceive consumer behaviors with visibility into supporting proof, akin to IP tackle, mannequin deployment particulars, and suspicious consumer prompts that triggered the alert.

Additionally, these alerts combine with Microsoft Defender XDR, permitting safety groups to centralize AI workload alerts into correlated incidents to know the total scope of a cyberattack, together with malicious actions associated to their generative AI purposes.

Secure and govern using the DeepSeeokay app
In addition to the DeepSeeokay R1 mannequin, DeepSeeokay additionally offers a shopper app hosted on its native servers, the place information assortment and cybersecurity practices might not align together with your organizational necessities, as is commonly the case with consumer-focused apps. This underscores the dangers organizations face if workers and companions introduce unsanctioned AI apps resulting in potential information leaks and coverage violations. Microsoft Security offers capabilities to find using third-party AI purposes in your group and offers controls for safeguarding and governing their use.
Secure and acquire visibility into DeepSeeokay app utilization
Microsoft Defender for Cloud Apps offers ready-to-use threat assessments for greater than 850 Generative AI apps, and the listing of apps is up to date constantly as new ones develop into well-liked. This means that you would be able to uncover using these Generative AI apps in your group, together with the DeepSeeokay app, assess their safety, compliance, and authorized dangers, and arrange controls accordingly. For instance, for high-risk AI apps, safety groups can tag them as unsanctioned apps and block consumer’s entry to the apps outright.

Comprehensive information safety
In addition, Microsoft Purview Data Security Posture Management (DSPM) for AI offers visibility into information safety and compliance dangers, akin to delicate information in consumer prompts and non-compliant utilization, and recommends controls to mitigate the dangers. For instance, the reviews in DSPM for AI can supply insights on the kind of delicate information being pasted to Generative AI shopper apps, together with the DeepSeeokay shopper app, so information safety groups can create and fine-tune their information safety insurance policies to guard that information and stop information leaks.

Prevent delicate information leaks and exfiltration
The leakage of organizational information is among the many high issues for safety leaders concerning AI utilization, highlighting the significance for organizations to implement controls that forestall customers from sharing delicate data with exterior third-party AI purposes.
Microsoft Purview Data Loss Prevention (DLP) lets you forestall customers from pasting delicate information or importing recordsdata containing delicate content material into Generative AI apps from supported browsers. Your DLP coverage also can adapt to insider threat ranges, making use of stronger restrictions to customers which might be categorized as ‘elevated risk’ and fewer stringent restrictions for these categorized as ‘low-risk’. For instance, elevated-risk customers are restricted from pasting delicate information into AI purposes, whereas low-risk customers can proceed their productiveness uninterrupted. By leveraging these capabilities, you’ll be able to safeguard your delicate information from potential dangers from utilizing exterior third-party AI purposes. Security admins can then examine these information safety dangers and carry out insider threat investigations inside Purview. These identical information safety dangers are surfaced in Defender XDR for holistic investigations.

This is a fast overview of a few of the capabilities that can assist you safe and govern AI apps that you simply construct on Azure AI Foundry and GitHub, in addition to AI apps that customers in your group use. We hope you discover this handy!
To be taught extra and to get began with securing your AI apps, check out the extra sources beneath:
Learn extra with Microsoft Security
To be taught extra about Microsoft Security options, go to our web site. Bookmark the Security weblog to maintain up with our professional protection on safety issues. Also, observe us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the newest information and updates on cybersecurity.