In a recent blog post, Microsoft announced new product capabilities that further its commitment to Trustworthy AI, grouped into three areas: security, safety, and privacy.
For security, Microsoft is introducing evaluations in Azure AI Studio to support proactive risk assessments, along with transparency into web queries in Microsoft 365 Copilot, helping admins and users understand how web search contributes to Copilot responses.
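To give a sense of what such a proactive risk assessment could look like in code, the sketch below uses the azure-ai-evaluation Python package to run one of the risk-and-safety evaluators against a single query/response pair. The project settings, placeholder values, and exact evaluator parameters are assumptions for illustration, not the announced API surface; consult the Azure AI Studio evaluation documentation for the current details.

```python
# Minimal sketch of a proactive risk assessment with the azure-ai-evaluation
# Python package (parameter names and values are assumptions and may differ).
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import ViolenceEvaluator

# Hypothetical Azure AI Studio project details.
azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<ai-studio-project>",
}

# The evaluator scores content using the safety-evaluation service
# associated with the project.
violence_evaluator = ViolenceEvaluator(
    azure_ai_project=azure_ai_project,
    credential=DefaultAzureCredential(),
)

# Score a single query/response pair; the result includes a severity label
# and score that can feed into a pre-deployment risk report.
result = violence_evaluator(
    query="What is the capital of France?",
    response="Paris is the capital of France.",
)
print(result)
```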
For safety, the additions include a correction capability in the Groundedness detection feature of Microsoft Azure AI Content Safety, which can fix hallucinated content in real time before users see it, along with embedded Content Safety and Protected Material Detection for Code.
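To make the correction flow more concrete, here is a minimal sketch of calling the Groundedness detection REST endpoint with correction enabled, using Python's requests library. The endpoint path, API version, and JSON field names reflect the preview API as best understood and should be treated as assumptions; check the Azure AI Content Safety documentation for the current contract.

```python
# Sketch of Groundedness detection with the correction option enabled.
# Endpoint path, api-version, and field names are assumptions based on the
# preview REST API and may differ from the current contract.
import requests

endpoint = "https://<content-safety-resource>.cognitiveservices.azure.com"
url = f"{endpoint}/contentsafety/text:detectGroundedness"
params = {"api-version": "2024-09-15-preview"}
headers = {
    "Ocp-Apim-Subscription-Key": "<content-safety-key>",
    "Content-Type": "application/json",
}

body = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "When was the contract signed?"},
    # The model output to check, and the sources it should be grounded in.
    "text": "The contract was signed on March 5, 2021.",
    "groundingSources": ["The agreement was executed on March 3, 2021."],
    # Ask the service to return a corrected, grounded version of the text.
    "correction": True,
    # Correction relies on an Azure OpenAI deployment supplied by the caller.
    "llmResource": {
        "resourceType": "AzureOpenAI",
        "azureOpenAIEndpoint": "https://<aoai-resource>.openai.azure.com",
        "azureOpenAIDeploymentName": "<gpt-4o-deployment>",
    },
}

response = requests.post(url, params=params, headers=headers, json=body)
response.raise_for_status()
print(response.json())  # ungroundedness result plus corrected text, if any
```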
For privacy, Microsoft is introducing confidential inferencing in preview for the Azure OpenAI Service Whisper model, allowing customers to build generative AI applications that support verifiable end-to-end privacy.
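Because confidential inferencing is a property of how the Whisper deployment is hosted (in an attested confidential-computing environment) rather than a different client API, a transcription request is expected to look like a standard Azure OpenAI call. The sketch below uses the openai Python SDK; the endpoint, key, API version, and deployment name are placeholder assumptions.

```python
# Sketch of transcribing audio with an Azure OpenAI Whisper deployment.
# Endpoint, key, api_version, and the deployment name are placeholders;
# confidential inferencing is assumed to be a property of the deployment,
# not a change to this client-side call.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<aoai-resource>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-06-01",
)

# Send an audio file to the Whisper deployment and print the transcript.
with open("meeting_recording.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="<whisper-deployment-name>",  # name of the Whisper deployment
        file=audio_file,
    )

print(transcript.text)
```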
These new capabilities demonstrate Microsoft's continued dedication to Trustworthy AI and to empowering customers to use and build AI solutions responsibly. By addressing concerns around security, safety, and privacy, Microsoft aims to foster trust in AI systems and unlock their positive potential for organizations and communities worldwide.