References

Ayoub M, Ballout AA, Zayek RA, Ayoub NF. Mind + machine: ChatGPT as a basic clinical decision support tool. Cureus. 2023; 18:(8) https://doi.org/10.7759/cureus.43690

Beauchamp T, Childress J. Principles of biomedical ethics. 7th edn. Oxford University Press; 2013

Christen M, Ineichen C, Tanner C. How ‘moral’ are the principles of biomedical ethics?—a cross-domain evaluation of the common morality hypothesis. BMC Med Ethics. 2014; 15:(1) https://doi.org/10.1186/1472-6939-15-47

Coeckelbergh M. Can we trust robots? Ethics Inf Technol. 2012; 14:(1)53-60 https://doi.org/10.1007/s10676-011-9279-1

Floridi L, Sanders JW. On the morality of artificial agents. Minds Mach. 2004; 14:(3)349-379 https://doi.org/10.1023/B:MIND.0000035461.63578.9d

Freyer N, Groß D, Lipprandt M. The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons. BMC Med Ethics. 2024; 25:(1) https://doi.org/10.1186/s12910-024-01103-2

Hassija V, Chamola V, Mahapatra A. Interpreting black-box models: a review on explainable artificial intelligence. Cognit Comput. 2023; 16:(1)45-74 https://doi.org/10.1007/s12559-023-10179-8

High-Level Expert Group on Artificial Intelligence. Ethics guidelines for trustworthy AI. 2019. https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf (accessed 18 February 2025)

International Council of Nurses. The ICN code of ethics for nurses. 2021. https://www.icn.ch/sites/default/files/2023-06/ICN_Code-of-Ethics_EN_Web.pdf (accessed 18 February 2025)

Moore ZEH, Patton D. Risk assessment tools for the prevention of pressure ulcers. Cochrane Libr. 2019; 2019:(1) https://doi.org/10.1002/14651858.CD006471.pub4

The ethics of AI in health care: an updated mapping review. 2024. https://ssrn.com/abstract=4987317 (accessed 18 February 2025)

Ruksakulpiwat S, Thorngthip S, Niyomyart A. A systematic review of the application of artificial intelligence in nursing care: where are we, and what's next? J Multidiscip Healthc. 2024; 17:1603-1616 https://doi.org/10.2147/JMDH.S459946

Terranova C, Cestonaro C, Fava L, Cinquetti A. AI and professional liability assessment in healthcare. A revolution in legal medicine? Front Med (Lausanne). 2024; 10 https://doi.org/10.3389/fmed.2023.1337335

Wynn M. The digital dilemma in nursing: a critique of care in the digital age. Br J Nurs. 2024; 33:(11)496-499 https://doi.org/10.12968/bjon.2024.0023

The ethics of non-explainable artificial intelligence: an overview for clinical nurses

06 March 2025
Volume 34 · Issue 5

Abstract

Artificial intelligence (AI) is transforming healthcare by enhancing clinical decision-making, particularly in nursing, where it supports tasks such as diagnostics, risk assessments, and care planning. However, the integration of non-explainable AI (NXAI) – which operates without fully transparent, interpretable mechanisms – presents ethical challenges related to accountability, autonomy, and trust. While explainable AI (XAI) aligns well with nursing's bioethical principles by fostering transparency and patient trust, NXAI's complexity offers distinct advantages in predictive accuracy and efficiency. This article explores the ethical tensions between XAI and NXAI in nursing, advocating a balanced approach that emphasises outcome validation, shared accountability, and clear communication with patients. By focusing on patient-centred, ethically sound frameworks, it is argued that nurses can integrate NXAI into practice, addressing challenges and preserving core nursing values in a rapidly evolving digital landscape.

Artificial intelligence (AI) is revolutionising healthcare by enabling computer systems to perform tasks traditionally requiring human intelligence (High-Level Expert Group on Artificial Intelligence, 2019). These tasks include data analysis, predictive modelling, diagnostics, and even aspects of decision-making. AI's integration into healthcare, especially in nursing, leverages machine learning and deep learning algorithms that analyse vast datasets to identify patterns, make predictions, and suggest actions, potentially reshaping clinical decision-making and patient care. With the power to improve efficiency, optimise resource allocation, and enhance patient outcomes, AI holds significant promise in nursing, particularly in areas such as risk assessments, diagnostics, and care planning (Ayoub et al, 2023; Ruksakulpiwat et al, 2024).

A pressing ethical concern surrounding AI in health care, however, is the ‘explainability’ of AI outputs. In many cases, AI operates as a ‘black box’ because of the opacity of its algorithms, particularly with advanced models such as deep learning. Explainability in AI is not a binary feature, but exists on a spectrum (Freyer et al, 2024). At one end are systems with transparent, straightforward algorithms that allow users to understand fully how outputs are derived, akin to conventional medical devices with clear mechanisms. Coeckelbergh (2012) describes this as the ‘functionalist performance criterion’. At the other end are highly complex AI systems that resist full explanation because their adaptive, non-linear algorithms may rely on millions of parameters (Hassija et al, 2023). An example of this type of AI is ChatGPT, the chatbot developed by OpenAI.

British Journal of Nursing