Ultra-short summary: TAIGHA, a new yardstick for measuring trust in AI health advice, has arrived! IT companies should take note too 👀
✨ Sparkle points ✨
● It can measure both "trust" and "distrust" in AI health advice — isn't that amazing? ✨
● There's also a short form, TAIGHA-S, so it's easy to try out 🎵
● It's a practical tool for IT companies that want to succeed in AI healthcare! 😎
Here's the detailed rundown~!
● Background: We're in an era where AI gives health advice, but can you actually trust that advice? 🤔 Existing yardsticks for measuring trust just weren't built for AI healthcare!
● Method: Enter TAIGHA, a new scale specifically for measuring trust in AI healthcare 💖 It was designed to measure both trust and distrust!
● Results: With TAIGHA, you can tell how trustworthy an AI healthcare tool is! And that feeds directly into raising the quality of healthcare services ✨
● Significance (the killer point ♡): IT companies can use TAIGHA to improve the trustworthiness of their AI healthcare services! Build something users feel safe using, and business opportunities open up too 😉
Real-world uses 💡
● Check the trust level of an AI symptom-checker app with TAIGHA, find weak spots, and make it easier to use 💖
● In a health-advice app, use TAIGHA results to tailor advice to each user ✨
Artificial Intelligence tools such as large language models are increasingly used by the public to obtain health information and guidance. In health-related contexts, following or rejecting AI-generated advice can have direct clinical implications. Existing instruments like the Trust in Automated Systems Survey assess the trustworthiness of generic technology, and no validated instrument measures users' trust in AI-generated health advice specifically. This study developed and validated the Trust in AI-Generated Health Advice (TAIGHA) scale and its four-item short form (TAIGHA-S) as theory-based instruments measuring trust and distrust, each with cognitive and affective components. The items were developed using a generative AI approach, followed by content validation with 10 domain experts, face validation with 30 lay participants, and psychometric validation with 385 UK participants who received AI-generated advice in a symptom-assessment scenario. After automated item reduction, 28 items were retained and reduced to 10 based on expert ratings. TAIGHA showed excellent content validity (S-CVI/Ave=0.99), and confirmatory factor analysis confirmed a two-factor model with excellent fit (CFI=0.98, TLI=0.98, RMSEA=0.07, SRMR=0.03). Internal consistency was high (α=0.95). Convergent validity was supported by correlations with the Trust in Automated Systems Survey (r=0.67/-0.66) and users' reliance on the AI's advice (r=0.37 for trust), while divergent validity was supported by low correlations with reading flow and mental load (all |r|<0.25). TAIGHA-S correlated highly with the full scale (r=0.96) and showed good reliability (α=0.88). TAIGHA and TAIGHA-S are validated instruments for assessing user trust and distrust in AI-generated health advice. Reporting trust and distrust separately permits a more complete evaluation of AI interventions, and the short scale is well-suited for time-constrained settings.
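The internal-consistency figures reported above (α=0.95 for TAIGHA, α=0.88 for TAIGHA-S) are Cronbach's alpha values. As a minimal sketch of how such a reliability coefficient is computed, here is the standard formula applied to hypothetical Likert-scale responses; the data, sample size, and item loadings below are invented for illustration and are not from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for a 4-item short form:
# each item reflects one shared latent "trust" factor plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(200, 4))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Items that all track the same latent construct push alpha toward 1, while unrelated items pull it toward 0, which is why a high alpha supports treating the summed scale score as a single measure.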