Pressing issues in healthcare digital technologies and AI

26 January 2023
Volume 32 · Issue 2

Abstract

John Tingle, Lecturer in Law, Birmingham Law School, University of Birmingham, discusses several reports addressing patient safety, ethical and legal issues in healthcare digital technologies and artificial intelligence.

As we move into 2023, it is a useful exercise to engage in some horizon scanning and to think about issues that might lie ahead. In the context of the subjects that I research, patient safety and the law, there is no shortage of future-facing concerns. This year we expect a government consultation paper on reform of NHS clinical negligence litigation (Hyde, 2022), and we are still digesting the House of Commons Health and Social Care Committee (2022) inquiry into the same topic.

At the same time, we can reflect on the changes brought to NHS care delivery by COVID-19 and how these are likely to continue. I am thinking here particularly about the innovative changes and developments in patient-GP consultations via digital conferencing applications and more remote consultations. The pandemic brought both positive and negative changes to aspects of healthcare delivery and treatment. Some newspaper headlines even herald a new age of digital health technology. Writing in The Independent, Wright (2022) discussed new artificial intelligence (AI) that can diagnose pneumonia by listening to someone cough:

‘If it is rolled out widely, scientists say people will be able to diagnose themselves with the illness without going to the doctor and costs for the NHS should fall.’

Virtual health care

The Professional Standards Authority (PSA) (2022) discussed regulating virtual health care and new technologies. One example is how AI can assist in areas such as complex surgery or cancer diagnosis, raising issues around data being analysed by algorithms, machine-generated diagnoses and their implications. The PSA also raised concern that the rise of so-called ‘virtual care’ could bring problems:

‘Evidence suggests that online healthcare businesses are underperforming against their “physical” competitors in terms of quality of care … and sometimes engage in risky practices … Similarly, new technology such as robotic surgery and AI has huge potential but also carries significant risk. Technological failure or AI running on biased or inaccurate data put patients at tangible risk and may exacerbate existing inequalities but lines of accountability are unclear.’

PSA, 2022:37

All these issues taken together raise important patient safety, ethical and legal questions, which need to be considered carefully when healthcare digital technology and AI policies and practices are discussed and developed. There are many helpful reports on these issues in a growing field of literature, and some key ones will be discussed here.

Patient safety implications of COVID-19

One positive aspect of the COVID-19 pandemic was that it forced some innovations in healthcare treatment and changed some attitudes. From a patient safety perspective, it could be argued that the urgent need for adaptability, flexibility and workarounds brought about by the pandemic has led to a more risk-tolerant healthcare environment.

The World Health Organization (WHO) (2022) explored the patient safety implications of the pandemic, looking at such topics as transformative changes, safety risks and harm implications. There is a discussion of how the pandemic accelerated new digital transformations and innovations, such as new care pathways, diagnostic strategies and novel at-home diagnostic tools.

‘There was rapid implementation of digital consultations via telemedicine, which had previously stalled’

WHO, 2022:7

The conclusion of the report includes a call to build on the progress made:

‘The world has never been as united to fight a common enemy … Now is an opportunity to build on several advances, such as the development and implementation of care pathways and guidelines, digital innovations, increasing transparency, open and frequent bidirectional communication, data sharing, collaboration and teamwork with the breakdown of traditional silos, and the rapid adoption of selected patient safety practices. These advances led to short-term benefits that now need to be sustained.’

WHO, 2022:29

This is a clarion call for our global healthcare systems to embrace the good that arguably came out of the pandemic. However, we cannot have a one-size-fits-all approach to digital care technologies and AI. Not all patients are in the same position, and virtual appointments at the GP surgery may not suit everyone.

Some groups of patients can fall by the wayside. The elderly, those with learning difficulties, those with mental health issues and patients with poor digital literacy can find a move away from direct personal healthcare contact difficult, stressed Healthwatch (2021). Its report casts an important light on how some sectors of the population experience digital exclusion, with a particular focus on older people and those with limited English or disabilities:

‘We found that people can be digitally excluded for various reasons including digital skill level, affordability of technology, disabilities, or language barriers. Participants often mentioned that they weren't interested in accessing healthcare remotely, even if they could.’

Healthwatch, 2021:2

The report set out five principles for post-COVID digital health care (Healthwatch, 2021:22-24):

  • Maintain traditional models of care alongside remote methods and support patients to choose the most appropriate appointment type to meet their needs
  • Invest in support programmes to give as many people as possible the skills to access remote care
  • Clarify patients’ rights regarding remote care, ensuring that people with support or access needs are not disadvantaged when accessing care remotely
  • Enable practices to be proactive about inclusion by recording people's support needs
  • Commit to digital inclusion by treating the internet as a universal right.

These represent commonsense, practical ways of including people who may not wish or be able to access new methods of care.

Balancing the NHS budget

There will always be a conflict between accommodating individual patient preferences and meeting the needs of the wider NHS. There is infinite demand for finite resources, with a growing, ageing population presenting with comorbidities. Arguably, given the present model of the NHS, it will always be short of financial resources, and these must be carefully managed.

Principle 5 in the Healthwatch (2021) report is an interesting one in the context of digital health care, cost and access, where there is discussion of access to the internet being a ‘legal right’:

‘We agree that the national ambition to provide digital-first primary care to everyone should be underpinned by a universal right to internet access, ensuring the NHS remains genuinely free at the point of use.’

Healthwatch, 2021:24

Providing free internet access for everybody brings a financial cost, which must be borne by somebody. This cost must also be balanced against other expenditure priorities, and choices must be made transparently and justly.

AI and ethics

WHO (2021) has produced guiding principles for AI in health care, covering both its design and use:

‘This report endorses a set of key ethical principles. WHO hopes that these principles will be used as a basis for governments, technology developers, companies, civil society, and intergovernmental organizations to adopt ethical approaches to appropriate use of AI for health.’

WHO, 2021:xii

The principles are discussed under the following headings in the report:

  • Protecting human autonomy
  • Promoting human wellbeing and safety and the public interest
  • Ensuring transparency, explainability and intelligibility
  • Fostering responsibility and accountability
  • Ensuring inclusiveness and equity
  • Promoting AI that is responsive and sustainable.

We can see the development of an international jurisprudence for digital technologies and AI in health, which can inform thinking and policy development in the area. Another developing influence on governance and regulation is the work being conducted by the Council of Europe (https://www.coe.int/en/web/artificial-intelligence). A Council of Europe steering committee commissioned a report by Mittelstadt (2021), which raises some key issues in relation to the impact of AI on the doctor-patient relationship. It also presents recommendations for common ethical standards for trustworthy AI.

In the concluding remarks, the author fully captures the issues at stake here:

‘The doctor-patient relationship is a keystone of ‘good’ medical practice, and yet it is seemingly being transformed into a doctor-patient-AI relationship. The challenge facing AI providers, regulators, and policymakers is to set robust standards and requirements for this new type of healing relationship to ensure patients’ interests and the moral integrity of medicine as a profession are not fundamentally damaged by the introduction of disruptive emerging technologies.’

Mittelstadt, 2021:65

Conclusion

This article has hopefully provided a snapshot of some key developing patient safety, legal and ethical issues in health care, with a particular focus on the future role of digital technologies and AI. For these new ways of treating and caring for patients to take root, they must generate public and professional trust and confidence. Policymakers allocating resources must recognise that a one-size-fits-all approach will not work.

There must also be some public acceptance that the NHS cannot satisfy everybody's needs and demands, and that hard resource allocation decisions must sometimes be made. On its present model, the NHS faces, and most probably always will face, an insatiable demand for its scarce resources. We should not view new technologies in health care as a panacea for saving costs; a much more holistic approach is needed.

References

Healthwatch. Locked out: digitally excluded people's experiences of remote GP appointments. 2021. https://tinyurl.com/237ehecc

House of Commons Health and Social Care Committee. NHS litigation reform. 2022. https://tinyurl.com/5d583fbj

Hyde. Government ‘looking carefully’ at radical clin neg proposals. 2022. https://tinyurl.com/yckwwzms

Mittelstadt. The impact of artificial intelligence on the doctor-patient relationship. Council of Europe. 2021. https://tinyurl.com/dmp24w92

Professional Standards Authority. Safer care for all: solutions from professional regulation and beyond. 2022. https://tinyurl.com/4kzu2jx9

World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. 2021. https://tinyurl.com/2s4ecym2

World Health Organization. Implications of the COVID-19 pandemic for patient safety: a rapid review. 2022. https://tinyurl.com/4nhpmj33

Wright. New artificial intelligence can diagnose pneumonia by listening to someone cough. The Independent. 2022. https://tinyurl.com/ysdpyvry