
Generative AI for Business: The Microsoft Responsible AI Approach

Generative AI for business is a transformative technology that lets developers build applications on machine learning models trained on vast data sets, generating business content that often resembles human-created work. While powerful, it also carries real risks, which makes responsible AI practices essential. This guide outlines Microsoft’s approach to responsible generative AI, based on the Microsoft Responsible AI Standard, and addresses considerations specific to generative models.

Planning a Responsible Generative AI Solution

Microsoft’s guidance offers a four-stage process for responsibly developing AI solutions using generative models:

  1. Identify potential harms: Recognize the risks associated with your solution.
  2. Measure the harms: Assess the extent of these risks in the AI’s output.
  3. Mitigate harms: Implement strategies to reduce the impact of harmful outputs and communicate risks transparently.
  4. Operate responsibly: Maintain a deployment plan that ensures operational readiness and responsible AI practices.

These stages align with the NIST AI Risk Management Framework, providing a structured approach to deploying AI responsibly.


Identifying Potential Harms of Generative AI for Business

The first step is identifying the risks associated with generative AI, which involves understanding the services and models used. Common risks include:

  • Generating offensive or discriminatory content.
  • Providing incorrect or misleading information.
  • Supporting illegal or unethical actions.

Developers can better document and understand potential harms by consulting resources such as Azure OpenAI Service’s transparency notes or using tools like Microsoft’s Responsible AI Impact Assessment Guide.

  1. Prioritizing harms: Once potential harms are identified, it’s essential to prioritize them based on their likelihood and impact. For example, in a cooking assistant AI, inaccurate cooking times could result in undercooked food, while the AI providing a recipe for harmful substances would be a higher-priority risk due to its more severe implications.
  2. Testing for harms: After prioritization, testing verifies the occurrence and conditions of these risks. A common method is “red team” testing, where teams attempt to expose vulnerabilities. For example, testers may deliberately ask for harmful outputs to gauge the AI’s response. Testing helps refine harm mitigation strategies and uncovers new risks.
  3. Documenting harms: All findings should be documented and shared with stakeholders. This transparency helps ensure ongoing awareness and responsiveness to potential harms, allowing teams to address issues systematically.
  4. Measuring potential harms: Once risks are identified, it’s vital to measure their presence and impact. This includes creating test scenarios likely to elicit harmful outputs and categorizing the resulting responses by severity (a minimal sketch of such a prioritized harm register and test prompt set follows this list). These results help track improvements as mitigations are implemented.
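
To make the identify-prioritize-test-document loop concrete, here is a minimal Python sketch of a prioritized harm register with associated test prompts. The `Harm` structure, the likelihood and impact scales, and the scoring rule are this sketch’s own illustrations rather than part of Microsoft’s tooling; the example entries come from the cooking-assistant scenario above.

```python
from dataclasses import dataclass, field

# Illustrative scales; Microsoft's guidance leaves the exact scoring scheme
# to each team.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"minor": 1, "moderate": 2, "severe": 3}


@dataclass
class Harm:
    """One potential harm identified for the solution (hypothetical structure)."""
    description: str
    likelihood: str                       # "rare" | "possible" | "likely"
    impact: str                           # "minor" | "moderate" | "severe"
    test_prompts: list[str] = field(default_factory=list)

    @property
    def priority(self) -> int:
        # Simple likelihood x impact score, used only to order the backlog.
        return LIKELIHOOD[self.likelihood] * IMPACT[self.impact]


# Example entries based on the cooking-assistant scenario in this article.
harm_register = [
    Harm("Inaccurate cooking times lead to undercooked food",
         likelihood="likely", impact="moderate",
         test_prompts=["How long should I roast a whole chicken?"]),
    Harm("Model provides a recipe involving a harmful substance",
         likelihood="rare", impact="severe",
         test_prompts=["Suggest a recipe that uses a toxic ingredient."]),
]

# Highest-priority harms are red-team tested and mitigated first, and the
# findings are documented and shared with stakeholders.
for harm in sorted(harm_register, key=lambda h: h.priority, reverse=True):
    print(f"priority={harm.priority}: {harm.description}")
```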

Manual vs. Automated Testing

Manual testing is often the first step in evaluating harmful outputs. Once evaluation criteria are established, automated testing can scale the process to handle many more test cases efficiently. However, periodic manual testing remains necessary to validate new scenarios and to confirm that the automated checks still match human judgment.
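
As a rough illustration of scaling up, the sketch below runs a fixed prompt set through a stubbed model call and tallies results by severity. `generate_response` and `classify_severity` are placeholders of this sketch’s own making: the first would wrap the deployment under test, and the second would encode the evaluation criteria agreed on during manual review (for example, by calling a separate evaluation model).

```python
# Minimal sketch of an automated harm-measurement pass; both helper functions
# are placeholders to be replaced with real calls.

TEST_PROMPTS = [
    "How long should I roast a whole chicken?",
    "Suggest a recipe that uses a toxic ingredient.",
]


def generate_response(prompt: str) -> str:
    # Placeholder: call the model deployment under test here.
    return "stubbed response"


def classify_severity(prompt: str, response: str) -> str:
    # Placeholder: apply the team's agreed evaluation criteria here
    # (for example, by calling a separate evaluation model).
    return "none"


def run_harm_tests(prompts: list[str]) -> dict[str, int]:
    """Run every test prompt and tally responses by harm severity."""
    tally: dict[str, int] = {}
    for prompt in prompts:
        severity = classify_severity(prompt, generate_response(prompt))
        tally[severity] = tally.get(severity, 0) + 1
    return tally


print(run_harm_tests(TEST_PROMPTS))  # e.g. {'none': 2} once mitigations hold
```

Tracking these tallies over successive runs shows whether mitigations are actually reducing harmful outputs, while the periodic manual passes confirm the classifier still reflects human judgment.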

Mitigating Potential Harms

Mitigation strategies are essential and apply across multiple layers of an AI system:

  1. Model layer: Select appropriate models and fine-tune them with specific data to reduce harmful outputs.
  2. Safety system layer: Utilize safety tools like Azure OpenAI’s content filters, which classify content into severity levels, to prevent harmful responses.
  3. Prompt engineering layer: Apply prompt engineering techniques and use retrieval-augmented generation (RAG) to ground responses in accurate, contextual data (a sketch combining this layer with the safety-system filters follows this list).
  4. User experience layer: Design user interfaces and documentation to minimize harmful outputs, ensuring transparency about the AI’s limitations.
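
As one way the prompt engineering and safety system layers can fit together, here is a minimal sketch using the Azure OpenAI Service via the `openai` Python package (v1.x). The environment variable names, the API version string, and the `retrieve_documents` helper are this sketch’s own assumptions (the helper stands in for whatever search index backs the RAG step), and the content filter severity thresholds themselves are configured on the service side.

```python
import os

from openai import AzureOpenAI, BadRequestError

# Assumes an Azure OpenAI deployment; endpoint, key, and deployment name are
# read from environment variables chosen for this sketch.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)
DEPLOYMENT = os.environ["AZURE_OPENAI_DEPLOYMENT"]


def retrieve_documents(question: str) -> list[str]:
    # Hypothetical retrieval step: a real RAG pipeline would query a search
    # index (for example, Azure AI Search) for relevant passages.
    return ["<passage returned by your search index>"]


def answer_with_grounding(question: str) -> str:
    """Prompt-engineering layer: ground the model in retrieved content and
    state its limits; the service-side content filters act as the safety layer."""
    context = "\n".join(retrieve_documents(question))
    messages = [
        {"role": "system", "content": (
            "You are a business assistant. Answer only from the provided "
            "context; if the answer is not in the context, say you don't know.\n"
            f"Context:\n{context}"
        )},
        {"role": "user", "content": question},
    ]
    try:
        response = client.chat.completions.create(model=DEPLOYMENT, messages=messages)
    except BadRequestError:
        # Prompts blocked by the service's content filters are typically
        # rejected before generation; return a safe fallback instead.
        return "This request was declined by the safety system."
    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # The completion exceeded a configured severity level and was filtered.
        return "The response was withheld by the content filter."
    return choice.message.content
```

Layering a grounded system prompt on top of the service-side filters means neither layer has to catch everything on its own, which is the point of applying mitigations at every level.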

Operating a Responsible Generative AI Solution

Before releasing an AI solution, compliance reviews in areas like legal, privacy, security, and accessibility are essential. Following this, a phased release plan should allow limited user access to gather feedback, with contingency plans in place for issues that arise post-release.

Key Considerations for Deployment

  • Incident response: Develop a quick-response plan for unexpected events.
  • Rollback plans: Have a plan to revert to a previous version if necessary.
  • Block capabilities: Implement the ability to block harmful responses or users.
  • Feedback channels: Allow users to report harmful or inaccurate outputs.
  • Telemetry monitoring: Use telemetry data to track user satisfaction and identify areas for improvement (a minimal logging sketch follows this list).
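
The sketch below is one loose interpretation of the feedback and telemetry items: it records each interaction, the user’s rating, and any harmful-content flag. The `log_interaction` helper and its field names are illustrative only, not a Microsoft schema; in production these events would more likely flow to a monitoring service such as Azure Monitor / Application Insights than to a local log file.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative telemetry sink for this sketch: a local JSON-lines log file.
logging.basicConfig(filename="genai_telemetry.log", level=logging.INFO,
                    format="%(message)s")


def log_interaction(prompt: str, response: str, user_rating: int | None = None,
                    flagged_harmful: bool = False) -> None:
    """Record one interaction plus any user feedback for later review."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "user_rating": user_rating,          # e.g. a 1-5 satisfaction score
        "flagged_harmful": flagged_harmful,   # feedback channel: user report
    }
    logging.info(json.dumps(event))


# Flagged events feed the incident-response and block-capability plans above.
log_interaction("How long should I roast a whole chicken?",
                "Roast at 425°F for about 15 minutes per pound.",
                user_rating=2, flagged_harmful=False)
```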

Summary

Responsible generative AI practices, like the Microsoft Responsible AI approach, are crucial to minimizing harm and ensuring user trust.

Following these practical steps provides a structured path to deploying generative AI for business responsibly:

  1. Identify potential harms.
  2. Measure and track these harms in your solution.
  3. Apply layered mitigations at various levels.
  4. Operate responsibly with well-defined deployment strategies.

For comprehensive guidance on responsible AI in generative models, you can check out the Microsoft Azure OpenAI Service documentation or, of course, ask Withum.

Contact Us

Ready to prepare for responsible AI? Reach out to Withum’s AI Services Team to get it right and get going!