International Journal of Science and Research Archive
International, peer-reviewed, open-access journal. ISSN Approved Journal No. 2582-8185

Real-time clinical decision support with explainable AI: Balancing accuracy and interpretability in high-stakes environments

Muhammad Faheem 1, * and Aqib Iqbal 2

1 Department of Information Technology Management, Cumberland University, Lebanon, Tennessee, United States.

2 Department of Project Management, University of Law, Birmingham, United Kingdom.

Review Article

International Journal of Science and Research Archive, 2025, 16(01), 1204-1220

Article DOI: 10.30574/ijsra.2025.16.1.1992

DOI url: https://doi.org/10.30574/ijsra.2025.16.1.1992

Received on 22 May 2025; revised on 05 July 2025; accepted on 08 July 2025

Artificial intelligence (AI) is rapidly reshaping how healthcare decisions are made, ushering in a new era of data-driven decision making, especially in high-stakes environments that demand fast and trustworthy decisions. However, the opacity of many AI models remains a serious obstacle to clinical adoption, motivating the use of explainable AI (XAI) methods. This paper considers the architecture, challenges, and applications of real-time clinical decision support systems (RT-CDSS) augmented with XAI. Drawing on case studies in imaging analytics, dementia prediction, and pharmacovigilance, the study examines the effects of explainability on trust, safety, and system usability. Major concerns covered include the trade-off between model performance and interpretability, the technical and organizational obstacles to deployment, and what the ethical and regulatory environment implies for making AI interpretable in clinical contexts. Patient-centered outcomes are assessed alongside process evaluation measures, including SHAP, LIME, and clinician trust over time. Finally, the paper discusses emerging trends such as human-in-the-loop structures, federated learning, and consortium efforts, and presents a roadmap for developing RT-CDSS that are not only accurate but also accountable, intelligible, and ethically compliant. These results highlight the need for AI technology that can be readily incorporated into clinical practice to promote transparent and safe medical decisions.
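The explanation methods the abstract names (SHAP and LIME) both rest on the same core idea: perturb a model's inputs and attribute the change in output to individual features. The sketch below, which is illustrative and not taken from the paper, implements a crude Shapley-style attribution over a toy clinical risk score; the model, feature names, and weights are all hypothetical placeholders.

```python
import random

# A toy "risk model": a hand-written scoring rule standing in for a trained
# clinical classifier. Feature names and weights are illustrative only.
def risk_model(features):
    age, systolic_bp, glucose = features
    return 0.02 * age + 0.01 * systolic_bp + 0.005 * glucose

def perturbation_attribution(model, x, n_samples=200, seed=0):
    """Estimate each feature's local importance by masking it to a baseline
    of zero inside random coalitions of the other features and averaging the
    resulting change in model output (a rough Shapley-value approximation)."""
    rng = random.Random(seed)
    attributions = []
    for i in range(len(x)):
        deltas = []
        for _ in range(n_samples):
            # Randomly keep or mask the *other* features; feature i stays.
            masked = [v if (j == i or rng.random() < 0.5) else 0.0
                      for j, v in enumerate(x)]
            with_i = model(masked)
            # Same coalition, but with feature i masked out as well.
            without_i = model([0.0 if j == i else v
                               for j, v in enumerate(masked)])
            deltas.append(with_i - without_i)
        attributions.append(sum(deltas) / n_samples)
    return attributions

patient = [70, 140, 180]  # age, systolic BP, glucose (illustrative values)
print(perturbation_attribution(risk_model, patient))
```

Because the toy model is linear, each feature's attribution reduces exactly to its weight times its value regardless of the sampled coalition; for nonlinear models the coalition averaging is what makes the estimate meaningful, which is the regime where libraries such as `shap` and `lime` are actually used.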

Explainable Artificial Intelligence (XAI); Real-Time Clinical Decision Support Systems (RT-CDSS); Medical AI Interpretability; Trust in Healthcare AI; Human-In-The-Loop Decision Support

https://journalijsra.com/sites/default/files/fulltext_pdf/IJSRA-2025-1992.pdf


Muhammad Faheem and Aqib Iqbal. Real-time clinical decision support with explainable AI: Balancing accuracy and interpretability in high-stakes environments. International Journal of Science and Research Archive, 2025, 16(01), 1204-1220. Article DOI: https://doi.org/10.30574/ijsra.2025.16.1.1992.

Copyright © 2025. The author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.
