GitHub repository: https://github.com/microsoft/responsible-ai-widgets/
Machine Learning (ML) teams who deploy models in the real world often face the challenge of conducting rigorous performance evaluation and testing. How often do we read a claim such as “Model X is 90% accurate on a given benchmark” and wonder what that claim means for practical usage of the model? In practice, teams are well aware that model accuracy may not be uniform across subgroups of the data, and that there may exist input conditions under which the model fails more often. Often, such failures may cause direct consequences related to lack…
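To make the subgroup point concrete, here is a minimal sketch of disaggregated evaluation: computing accuracy per subgroup rather than a single aggregate number. The data, column names, and grouping attribute below are hypothetical placeholders for illustration, not the API of any specific toolkit.

```python
import pandas as pd

# Hypothetical evaluation results: true labels, model predictions, and a
# grouping attribute for each example.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "true_label": [1, 0, 1, 1, 0, 1],
    "predicted":  [1, 0, 1, 0, 1, 1],
})

# A single aggregate accuracy hides any disparity between subgroups.
overall = (df["true_label"] == df["predicted"]).mean()

# Per-subgroup accuracy reveals where the model fails more often.
per_group = (
    df.assign(correct=df["true_label"] == df["predicted"])
      .groupby("group")["correct"]
      .mean()
)

print(f"overall accuracy: {overall:.2f}")  # 0.67
print(per_group)                           # A: 1.00, B: 0.33
```

Here the benchmark-style aggregate (67%) looks like a single model property, yet the model is perfect on subgroup A and wrong two times out of three on subgroup B — exactly the kind of hidden failure mode the tooling discussed in this post is designed to surface.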
To help practitioners design better user-facing AI-based systems, Microsoft recently published a set of guidelines for Human-AI Interaction, based on decades of research and validated through rigorous user studies across a variety of AI products. The guidelines cover a broad range of interactions: from a user's initial introduction to an AI system, through continuous interaction, AI model updates and refinements, and the management of AI failures.
While the guidelines describe what AI practitioners should create to support effective human-AI interaction, this series of posts explains how to implement the guidelines in AI-based products.
Principal Researcher at Microsoft Research