Lookup NU author(s): Eman Alamoudi, Dr Ellis Solaiman (ORCiD)
Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
Patients increasingly rely on online reviews when choosing healthcare providers, yet the sheer volume of these reviews can hinder effective decision-making. This paper summarises a mixed-methods study evaluating a proposed explainable AI system that analyses patient reviews and provides transparent explanations for its outputs. The survey (N=60) indicated broad optimism regarding usefulness (≈82% agreed the system saves time; ≈78% that it highlights essential information), alongside strong demand for explainability (≈84% considered it important to understand why a review is classified a certain way; ≈82% said explanations would increase their trust). Around 45% preferred combined text-and-visual explanations. Thematic analysis of open-ended survey responses revealed core requirements such as accuracy, clarity and simplicity, responsiveness, data credibility, and unbiased processing. In addition, interviews with AI experts provided deeper qualitative insights, highlighting technical considerations and potential challenges for different explanation methods. Drawing on the Technology Acceptance Model (TAM) and trust-in-automation research, the findings suggest that high perceived usefulness and transparent explanations promote adoption, whereas complexity and inaccuracy hinder it. The paper contributes actionable design guidance for layered, audience-aware explanations in healthcare review systems.
Author(s): Alamoudi E, Solaiman E
Publication type: Conference Proceedings (inc. Abstract)
Publication status: Published
Conference Name: International Conference on Artificial Intelligence, Computer, Data Sciences and Applications
Year of Conference: 2026
Online publication date: 07/02/2026
Acceptance date: 29/10/2025
Date deposited: 12/02/2026
Publisher: IEEE