This paper reports a case study on how explainability requirements were elicited during the development of an AI system for predicting cerebral palsy (CP) risk in infants. Over 18 months, we followed a development team and hospital clinicians as they sought to design explanations that would make the AI system trustworthy. Contrary to the assumption that users need detailed explanations of the inner workings of AI systems, our findings show that clinicians trusted the system when its explanations enabled them to evaluate predictions against their own clinical assessments. In particular, a simple prediction graph proved effective because it supported clinicians' existing decision-making practices. Drawing on concepts from both Requirements Engineering and Explainable AI, we use the theoretical lens of Evaluative AI to introduce the notion of Evaluative Requirements: system requirements that allow users to scrutinize AI outputs and compare them with their own assessments. Our study demonstrates that such requirements are best discovered through the well-established methods of iterative prototyping and observation, making them essential for building trustworthy AI systems in expert domains.