Azure AI Foundry: Securing generative AI models with Microsoft Security | Microsoft Security Blog

Every week, new generative AI models appear with a wide range of capabilities. In this world of rapid innovation, choosing which models to integrate into your AI system calls for a thoughtful risk assessment that balances the benefits of new advances against a robust security posture. At Microsoft, we focus on making our AI development platform a secure and trustworthy place where you can explore and innovate with confidence.

Here we will talk about one key part of that: how we secure the models and the runtime environment itself. How do we protect against a bad model compromising your AI system, your broader cloud estate, or even Microsoft’s own infrastructure?

How Microsoft protects data and software in AI systems

Before we go further, let me clear up one very common misconception about how data is used in AI systems. Microsoft does not use customer data to train shared models, nor does it share your logs or content with model providers. Our AI products and platforms are part of our standard product offerings, subject to the same terms and trust boundaries you have come to expect from Microsoft, and your model inputs and outputs are considered customer content and handled with the same protection as your email messages. Our AI platform offerings (Azure AI Foundry and Azure OpenAI Service) are 100% hosted by Microsoft on its own servers, with no runtime connections to the model providers. We do offer some features, such as model fine-tuning, that let you use your own data to build better models for your own use, and those models remain in your tenant.
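To make that boundary concrete, here is a minimal sketch (not from the original post) of calling a model deployed in your own Azure resource through the Azure OpenAI client, authenticating with Microsoft Entra ID rather than a shared key. The endpoint and deployment name are placeholders you would replace with your own values.

```python
# Minimal sketch: calling a model deployed in your own Azure resource.
# The endpoint and deployment name are placeholders, not real values.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Authenticate with Microsoft Entra ID instead of a static API key.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # your own resource
    azure_ad_token_provider=token_provider,
    api_version="2024-06-01",
)

# Inputs and outputs are customer content processed by your own deployment;
# nothing in this call reaches out to the model provider.
response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "Summarize our data-handling policy."}],
)
print(response.choices[0].message.content)
```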

So, on to model security: the first thing to remember is that models are just software, running in Azure virtual machines (VMs) and accessed through an API; they have no magical powers to break out of that VM, any more than any other software you might run in a VM. Azure is already heavily defended against software running in a VM that tries to attack Microsoft’s infrastructure, because bad actors attempt exactly that every day and don’t need AI to do it, and Azure AI Foundry inherits all of those protections. This is a zero-trust architecture: Azure services do not assume that things running on Azure are safe!

Now, it is possible to hide malware inside an AI model. This could pose a danger to you in much the same way that malware in any other open- or closed-source software could. To mitigate this risk, for our highest-visibility models we scan and test them before release (a minimal sketch of one such check follows the list):

  • Malware analysis: Scans AI models for embedded malicious code that could serve as an infection vector and launchpad for malware.
  • Vulnerability assessment: Scans for common vulnerabilities and exposures (CVEs) and zero-day vulnerabilities targeting AI models.
  • Backdoor detection: Scans model functionality for evidence of supply chain attacks and backdoors, such as arbitrary code execution and network calls.
  • Model integrity: Analyzes an AI model’s layers, components, and tensors to detect tampering or corruption.
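As an illustration of the kind of check these scans perform (this is not Microsoft’s actual scanning pipeline), a common malware and backdoor test on pickle-serialized model files is to inspect the pickle opcodes for imports that can execute arbitrary code before the file is ever loaded:

```python
# Illustrative sketch only: flag pickle-serialized model files that would
# import modules capable of running arbitrary code when deserialized.
import pickletools

SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "socket", "importlib"}

def find_suspicious_imports(path: str) -> list[str]:
    """Return the imports a pickle file would perform when loaded."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # GLOBAL's argument is "module name"; check the top-level module.
            module = str(arg).split(" ")[0].split(".")[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(str(arg))
        elif opcode.name == "STACK_GLOBAL":
            # Import target is built on the stack; flag it for manual review.
            findings.append("STACK_GLOBAL (resolve manually)")
    return findings

# Example: inspect a downloaded checkpoint before loading it.
# print(find_suspicious_imports("model_weights.pkl"))
```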

You can see that a model has been scanned from an indication on its model card; no customer action is required to get this benefit. For especially high-visibility models, such as DeepSeek R1, we go even further and have teams of experts tear the software apart, examining it down to the source code, while red teams probe the system adversarially, and so on. This deeper level of scanning does not (yet) have an explicit model card indication, but given the model’s public visibility we wanted the scanning done before we had the user interface elements ready.

Defending and governing AI models

Readers who are security professionals will likely point out that no scan can detect every malicious behavior. This is the same problem an organization faces with any other third-party software, and organizations should address it in the usual way: trust in that software should come partly from trusted intermediaries such as Microsoft, but above all it should be rooted in the organization itself.

For those who want a more secure experience, once you have chosen and deployed a model you can defend and govern it with the full suite of Microsoft Security products. You can read more about how to do that here: Securing DeepSeek and other AI systems with Microsoft Security.

And of course, because every model’s quality and behavior is different, you should evaluate each model not only for security, but also for how well it fits your specific use case, by testing it as part of your complete system. This is part of a broader approach to securing AI systems that we will return to in an upcoming blog.

Using Microsoft Security to secure AI models and customer data

In short, the key points of our approach to securing models in Azure AI Foundry are:

  1. Microsoft performs several security investigations on key AI models before hosting them in the Azure AI Foundry catalog, and continues to monitor for changes that may affect the trustworthiness of each model for our customers. You can use the information on the model card, along with your confidence (or lack of it) in any given model builder, to assess your posture toward any model just as you would for any third-party software library.
  2. Models run on Azure are isolated within the customer’s tenant. There is no access to or from the model provider, including close partners such as OpenAI.
  3. Customer data is not used to train models, nor is it made available outside of the Azure tenant (unless the customer designs their system to do so).

Learn more with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
