The ethics of non-explainable artificial intelligence: an overview for clinical nurses

Abstract
Artificial intelligence (AI) is transforming healthcare by enhancing clinical decision-making, particularly in nursing, where it supports tasks such as diagnostics, risk assessments, and care planning. However, the integration of non-explainable AI (NXAI) – which operates without fully transparent, interpretable mechanisms – presents ethical challenges related to accountability, autonomy, and trust. While explainable AI (XAI) aligns well with nursing's bioethical principles by fostering transparency and patient trust, NXAI's complexity offers distinct advantages in predictive accuracy and efficiency. This article explores the ethical tensions between XAI and NXAI in nursing, advocating a balanced approach that emphasises outcome validation, shared accountability, and clear communication with patients. By focusing on patient-centred, ethically sound frameworks, it is argued that nurses can integrate NXAI into practice, addressing challenges and preserving core nursing values in a rapidly evolving digital landscape.
Artificial intelligence (AI) is revolutionising healthcare by enabling computer systems to perform tasks traditionally requiring human intelligence (High-Level Expert Group on Artificial Intelligence, 2019). These tasks include data analysis, predictive modelling, diagnostics, and even aspects of decision-making. AI's integration into healthcare, especially in nursing, leverages machine learning and deep learning algorithms that analyse vast datasets to identify patterns, make predictions, and suggest actions, potentially reshaping clinical decision-making and patient care. With the power to improve efficiency, optimise resource allocation, and enhance patient outcomes, AI holds significant promise in nursing, particularly in areas such as risk assessments, diagnostics, and care planning (Ayoub et al, 2023; Ruksakulpiwat et al, 2024).
A pressing ethical concern surrounding AI in healthcare, however, is the ‘explainability’ of AI outputs. In many cases, AI operates as a ‘black box’ due to the opacity of its algorithms, particularly with advanced models such as deep learning. Explainability in AI is not a binary feature, but exists on a spectrum (Freyer et al, 2024). At one end are systems with transparent, straightforward algorithms that allow users to fully understand how outputs are derived, akin to conventional medical devices with clear mechanisms; Coeckelbergh (2012) describes this as the ‘functionalist performance criterion’. At the other end are highly complex AI systems that resist full explanation because their adaptive, non-linear algorithms may rely on millions of parameters (Hassija et al, 2023). An example of this type of AI is ChatGPT, the chatbot developed by OpenAI.
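The two ends of this spectrum can be made concrete with a minimal sketch. The example below, written in Python with the scikit-learn library, is purely illustrative and is not drawn from the article: the feature names (such as a hypothetical falls-risk dataset) and the data are invented for demonstration. It contrasts a transparent model, whose learned weights a clinician could inspect directly, with an opaque neural network of the kind that resists such inspection.

```python
# Illustrative sketch only: contrasts the two ends of the explainability
# spectrum. The dataset and feature names are hypothetical stand-ins for
# a clinical risk-assessment task (e.g. falls risk).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic binary-outcome data standing in for patient records.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "mobility_score", "medication_count", "prior_falls"]

# Transparent end of the spectrum: a logistic regression whose learned
# coefficients can be read directly, so a user can see which inputs
# push the predicted risk up or down.
transparent = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, transparent.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# Opaque end: a multi-layer neural network. It may predict accurately,
# but its thousands of interacting weights have no individually
# meaningful clinical interpretation - the 'black box'.
opaque = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=0).fit(X, y)
print(f"black-box training accuracy: {opaque.score(X, y):.2f}")
```

The point of the contrast is not that one model is better, but that only the first offers a mechanism a nurse could explain to a patient; the second must instead be justified by validating its outcomes.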