
Ethics of digital health tools

New technologies promise to improve healthcare by empowering the individual and by offering new approaches to cope with the limitations of current practice. However, they also generate specific and novel ethical issues. This article discusses the ethical implications relevant to data quality, the patient-physician relationship, and equity of access to healthcare services, as well as potential questions arising from the utilisation of artificial intelligence (AI) in healthcare settings. It highlights the importance of evaluating the ethical implications of new digital solutions before their implementation in clinical practice, as well as the need to update educational pathways for both citizens and healthcare professionals.

e-Cardiology and Digital Health


Introduction

e-Health has been defined as the use of information and communications technology (ICT) in support of health and health-related fields [1]. Recently, the term digital health has been introduced as a more general definition encompassing e-Health as well as emerging areas such as the use of advanced computing sciences in “big data”, genomics and artificial intelligence (AI), to support and deliver health care and to improve the health and wellbeing of people [2].

These technologies are considered promising because they could enable more personalised approaches to care, empower patients, improve patient safety, support better communication between care providers and patients, increase access to health information, improve chronic disease management and prevention, make the healthcare system more efficient, broaden access to scarce specialist skills while reducing patient referrals, support tele-education where expertise is in short supply, connect patients even in remote locations, and help manage the large amount of health data available nowadays [3].

While the potential of digital health solutions is wide-ranging, their design, development, deployment, and utilisation rely on business models and use of algorithms and data that need to be considered from an ethical point of view.

Generally, in medicine and nursing the basic principles of ethics concern respect for human life, respect for human dignity, autonomy, care, justice, and maximising beneficence. Digital health solutions pursue different ends, thus extending the existing ethical implications beyond what has been accepted until now [4].

In addition, the availability of new technology raises issues beyond the conventional ones (i.e., privacy, confidentiality, and informed consent) that are specific and novel. Table 1 chronologically lists a selection of publications from the last three years that focus on the ethics of digital health tools; it illustrates how the principles that have been addressed, and considered a priority, have evolved over time.

 

Table 1. Evolution over time in selected literature of the main ethical aspects concerning digital health.

Authors | Ethical aspects
Kleinpeter E. 2017 [13] | Data protection, equality of service availability, medicine shifting from the art of curing to the science of measurement, changes in the patient-physician relationship.
Terrasse M et al. 2019 [25] | Impact of social networking sites on the doctor-patient relationship, development of e-health platforms to deliver care, use of online data and algorithms to inform health research, and the broader public health consequences of widespread social media use.
Boers SN et al. 2020 [26] | Dealing with predictive and diagnostic uncertainty, roles and responsibilities of patients, roles and responsibilities of physicians, the patient-physician relationship.
Burr C et al. 2020 [27] | Privacy, autonomy, accountability, intelligibility, accessibility.
Fenech and Buston. 2020 [28] | Relationship, data, transparency and explainability, health inequalities, errors and liability, meeting public needs, regulation, consequences of novel insights, trusting algorithms, collaborations between public and private sector organisations, developing principles and translating them into policy.

Among these, this paper discusses three main ethical implications relevant to digital health, as well as the potential questions arising from the utilisation of AI in the healthcare setting.

Data quality

Novel mobile medical technologies for ubiquitous monitoring are now widely available; they are directly controlled and operated by the patient and often provide an onboard interpretation of the measured signals. To overcome the difficulties of integrating these systems into the clinical workflow [5], it becomes of paramount importance: 1) to know the exact claims, limits of validity, and reported accuracy for which the medical device or software has been approved; 2) to be sure that the device has been used as recommended, so that the patient-collected data can be considered reliable; and 3) to be aware of software updates that could have changed the performance of the device.

The first point highlights, on the one hand, the need for more transparency in the medical device certification process [6] and, on the other, the need for clinicians to be informed about the performance and limits of use of the devices they might recommend to patients. As an example, the Apple smartwatch software that uses an optical sensor to detect the pulse wave and looks for beat-to-beat variability at rest to detect possible atrial fibrillation (AF) is not designed for individuals already diagnosed with AF [7]. Similarly, the ECG app on the same device, which uses the electrical heart sensor to record a 1-lead ECG and then provides a result (i.e., sinus rhythm, AF, or inconclusive), is not recommended for users with other known arrhythmias, is not designed for individuals diagnosed with AF, and cannot detect AF at heart rates above 120 bpm [8].

The second point implies that the patient needs to have sufficient digital skills and health literacy (i.e., the ability to seek, find, understand, and appraise health information from electronic sources and apply the knowledge gained to addressing or solving a health problem [9]) in order to be able to properly use the device.

The third point refers to the fact that digital health solutions based on smartphone applications and embedded or connected sensors can be modified via an internet connection to patch bugs, to support new operating systems, and to expand the features provided. In this process, where automatic updating depends on the smartphone user's settings, algorithm performance can change profoundly, making it necessary to verify which software version was used to automatically interpret the data [10,11].
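To make this version-tracking point concrete, the minimal sketch below (an illustrative example only; the `DeviceReading` class and its field names are assumptions, not part of any real device's software) attaches the app and algorithm version to each patient-collected reading, so that measurements interpreted by a not-yet-validated algorithm version, e.g., after an automatic update, can be flagged for review:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DeviceReading:
    """A patient-collected measurement with software provenance attached."""
    value: float             # e.g., heart rate in bpm
    classification: str      # on-device interpretation, e.g., "sinus rhythm"
    app_version: str         # version of the app that produced the interpretation
    algorithm_version: str   # version of the embedded classification algorithm
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def needs_reinterpretation(reading: DeviceReading, validated_versions: set) -> bool:
    """Flag readings interpreted by an algorithm version that has not been
    clinically validated (e.g., after an automatic over-the-air update)."""
    return reading.algorithm_version not in validated_versions

# Example: an automatic update moved the device to an unvalidated algorithm.
r = DeviceReading(value=74.0, classification="sinus rhythm",
                  app_version="2.1.0", algorithm_version="3.0.0")
print(needs_reinterpretation(r, validated_versions={"2.4.1", "2.5.0"}))  # True
```

Storing provenance with each reading, rather than only in the app settings, means the interpreting software version can still be audited even after later updates.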

Changes in patient-physician relationship

While in the past the healthcare professional was the natural intermediary in guiding patient access to health information, nowadays this role is functionally by-passed by new apomediaries (i.e., the Web, on-line groups, …) in a process of progressive disintermediation [12], with the ethical concern that incorrect ideas or potentially dangerous practices could take hold. In addition, the development of digital health poses the risk of shifting medicine from the “art of curing” to a “science of measurement”, where the inner life and feelings of the patient would be forgotten, and the communicative dimension of the medical act would be secondary to its informational dimension [13].

Nevertheless, to guide patients on their digital health journey, physicians need new professional learning paths so that they can use the new tools properly, interpret their results correctly, and integrate them into medical practice ethically and deontologically.

The wide patient access to medical devices (beyond the conventional thermometer and blood pressure monitor found in most homes), as well as to lifestyle wearables, will lead to a situation in which patients self-monitor their health, confirm or disconfirm symptoms, trigger a physician consultation, or take immediate action.

This new scenario of shifting tasks and responsibilities could affect the roles that doctors have in healthcare: on the one hand, it could stimulate new interactions between care providers and patients to better understand health and its correlation with lifestyle; on the other, it could create in the patient a sense of independence, self-judgement and confrontation. This raises questions as to whether valuable features of the traditional patient-physician relationship would be lost, whether medical training would need to be rethought, and whether patient expectations should be redirected [14].

Equity of access to healthcare services

Health equity has been defined as the absence of discrimination or unfair health disparities, to be achieved by minimising health disparities among groups of people with different levels of underlying social advantage [15]. In this context, digital health is considered to have the potential to resolve health inequalities. However, the lack of computer technology, limited access to the internet, lack of the required skills, and physical access barriers (which mainly affect low-income groups, the elderly, and people with disabilities) could limit the accessibility of these new services for the very categories expected to benefit most from them. This would exacerbate disparities in healthcare quality and outcomes, reinforcing what has been described as the “digital divide” [3,16].

In particular, the importance of health literacy (i.e., the degree to which an individual can access, process, and comprehend basic health information and services in order to inform and participate in health decisions) to cardiovascular disease management, prevention and public health has been addressed recently and underlined in a scientific statement from the American Heart Association [17].

Also, it has been reported that individuals with limited health literacy may have less access to reliable internet-based health education materials [18]. While digital health solutions, text messages, and social networking may represent attractive options for patient-oriented education and chronic disease management, a mobile health intervention study showed that racial or ethnic minorities, older adults, and those with limited health literacy were the least engaged with text messaging and automated calls [19].

Data enabling health (artificial intelligence)

AI is a multidisciplinary area, encompassing the fields of machine learning, natural language processing, and robotics, and describing a range of techniques that allow computers to perform tasks that would usually require human reasoning and problem-solving skills. AI originated more than 60 years ago in the field of computer science but, thanks to technological developments in both artificial neural networks and powerful graphics processing units (GPUs), it now seems to have reached a level of maturity adequate for application in any field of medicine. Thanks to its ability to extract knowledge and learn from large sets of clinical data, AI can play a role in diagnostics, decision making and personalised medicine. At the same time, it creates a novel set of ethical challenges that need to be identified and mitigated [20].

On 10 April 2018, twenty-five European countries signed a Declaration of Cooperation on Artificial Intelligence (AI), with the goals of increasing public and private investment in AI, preparing for socio-economic changes, and ensuring an appropriate ethical and legal framework. Concerning this last aim, in June 2018 the Commission appointed 52 experts to a High-Level Expert Group on Artificial Intelligence (AI HLEG), including representatives from academia, civil society, as well as industry (but no medical associations, physicians, or patients were involved).

After publication of a first draft in December 2018 and review of the 506 comments received, the final voluntary and non-binding Ethics Guidelines for Trustworthy AI were published in April 2019 [21]. The document generally addresses seven key requirements (i.e., human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) and four interrelated ethical principles (respect for human autonomy; prevention of harm; fairness; and explicability). Table 2 gives examples of questions that arise when these principles are contextualised in the healthcare domain.

 

Table 2. Examples of questions arising when contextualising the ethical principles addressed in the Ethics Guidelines for Trustworthy AI [21] in the healthcare domain.

Ethical principle

Contextualised questions in the healthcare domain

Respect for human autonomy

  • How are the psychological mechanisms of the healthcare provider-patient relationship affected by exposure to AI (e.g., a computer-aided decision support system)?
  • How will the process of dealing with uncertainty in the medical act be changed by using AI?
  • If AI manages simple cases while physicians deal with “complex case” scenarios, how could this impact the learning process and data interpretation of new physicians?
  • In teleconsultation settings, how could interacting with a real physician/nurse rather than with an algorithm change the outcome?
  • How will the freedom to choose whether or not to have AI involved in one's own care path be guaranteed?

Prevention of harm

  • False negative interpretations could cause greater harm than false positive results, but false positives could increase medicalisation: how can this trade-off be dealt with?
  • If medical guidelines are modified, is there a need to update and re-train the AI?
  • Is the use of synthetic cases to train AI on a variety of patient variables and conditions that might not be present in random patient samples, but are important to treatment recommendations, fair and safe enough?
  • If it is not possible to evaluate the validity and fairness of an algorithm's decisions, how could such decisions be used responsibly?

Fairness

  • How can one distinguish marketing hype from verifiable and tested claims?
  • In case of AI-related medical errors, who is responsible: the computer programmer, the tech company, the regulator or the clinician?
  • How does one contest and seek effective redress against decisions made by AI systems and by the humans operating them (e.g., inclusion in life-saving novel treatment trials)?
  • How does one define the entity accountable for a decision?
  • How does one prevent abuses that could generate disparity of treatment or discrimination (e.g., not employing a person because of a health background check, or higher insurance prices), given AI's capability of identification without consent?

Explicability

  • AI results are strongly dependent on the training data set (composition, geographical origin, local gold standard), with possible classification bias (“racial AI”). How can this risk be minimised?
  • Transparency of the AI processes that lead to a specific diagnosis or suggest a treatment (why that choice is recommended for that particular patient) is needed to gain trust. How can this be reconciled with “black-box” AI methods (e.g., deep learning), where the decision process is not visible or explainable?
  • What is the uncertainty surrounding a specific AI medical diagnosis?
  • How does one deal with AI model updates based on the potential continuous evolution of the training database?
  • How does one measure and compare the performance of algorithms in a consistent and independent way?
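The trade-off between false negatives and false positives can be made concrete with a short positive-predictive-value calculation; the sensitivity, specificity, and prevalence figures below are illustrative assumptions, not measured device performance. Even a highly specific screening algorithm produces many false alarms when the condition is rare in the screened population:

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """PPV via Bayes' rule: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers: 98% sensitivity, 98% specificity, and 1% prevalence
# of AF in the screened population.
ppv = positive_predictive_value(0.98, 0.98, 0.01)
print(f"PPV = {ppv:.2f}")  # PPV = 0.33: about two in three positives are false alarms
```

This is why the population in which a screening algorithm is deployed matters as much as the algorithm's reported accuracy.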

 

Underlying the development of AI in healthcare are questions about its regulatory environment. Currently, software used for medical purposes, such as prediction and prognosis, has been added to the medical device definition of the Medical Device Regulation [22], thus in principle bringing AI under this regulatory umbrella. However, the specific characteristics of AI have not been addressed in such regulations or in related guidance documents.

More recently, the European Commission published a “White Paper on Artificial Intelligence - A European approach to excellence and trust” [23], setting out policy options on how to promote the uptake of AI and how to address the risks associated with certain uses of this new technology, strongly supporting a human-centric approach.

Considering the current lack of requirements regarding transparency, traceability and human oversight, the creation of a clear European regulatory framework for trustworthy AI, applicable to related products and services, is keenly anticipated. The results of these regulatory efforts will become visible in the near future and will allow a better evaluation of how to integrate AI properly as a digital health tool in healthcare practice, hopefully overcoming the limitations of current implementations [24].

Conclusions

Technological progress is part of our society, and it is only natural that it impacts the healthcare domain. To cope properly with new digital health tools, it is important to evaluate their ethical implications before their implementation in clinical practice. In this way, the regulatory framework, as well as the educational pathways for both citizens and healthcare professionals, can be updated so as to guide the process, rather than being affected first and searching for a remedy only after the fact.

References


  1. Global diffusion of eHealth: Making universal health coverage achievable. Report of the third global survey on eHealth. Geneva: World Health Organization; 2016. 
  2. Agenda item 12.4. Digital health resolution. In: Seventy-first World Health Assembly, Geneva, 26 May 2018. Geneva: World Health Organization; 2018. 
  3. Moghaddasi H, Amanzadeh M, Rahimi F, Hamedan M. eHealth equity: current perspectives. J Int Soc Telemed eHealth. 2017;5:e9. 
  4. Moerenhout T, Devisch I, Cornelis GC. E-health beyond technology: analyzing the paradigm shift that lies beneath. Med Health Care and Philos. 2018;21:31-41. 
  5. Frederix I, Caiani EG, Dendale P, Anker S, Bax J, Böhm A, Cowie M, Crawford J, de Groot N, Dilaveris P, Hansen T, Koehler F, Krstačić G, Lambrinou E, Lancellotti P, Meier P, Neubeck L, Parati G, Piotrowicz E, Tubaro M, van der Velde E. ESC e-Cardiology Working Group Position Paper: Overcoming challenges in digital health implementation in cardiovascular medicine. Eur J Prev Cardiol. 2019;26:1166-77. 
  6. Fraser AG, Butchart EG, Szymański P, Caiani EG, Crosby S, Kearney P, Van de Werf F. The need for transparency of clinical evidence for medical devices in Europe. Lancet. 2018;392:521-530. 
  7. De novo classification request for irregular rhythm notification feature. DEN180042. 
  8. De novo classification request for ECG app. DEN180044.  
  9. Norman CD, Skinner HA. eHealth Literacy: Essential Skills for Consumer Health in a Networked World. J Med Internet Res. 2006;8:e9. 
  10. Freedman B. Screening for Atrial Fibrillation Using a Smartphone: Is There an App for That? J Am Heart Assoc. 2016;5:e004000. 
  11. Albert DE. Performance of hand-held electrocardiogram devices to detect atrial fibrillation in a cardiology and geriatric ward setting. Europace. 2017;19:1408. 
  12. Eysenbach G. Medicine 2.0: social networking, collaboration, participation, apomediation, and openness. J Med Internet Res. 2008;10:e22. 
  13. Kleinpeter E. Four Ethical Issues of “E-Health”. IRBM. 2017;38:245-9. 
  14. Lucivero F, Jongsma KR. A mobile revolution for healthcare? Setting the agenda for bioethics. J Med Ethics. 2018;44:685-9. 
  15. Braveman P, Gruskin S. Defining equity in health. J Epidemiol Community Health. 2003;57:254-8. 
  16. Levy H, Janke AT, Langa KM. Health literacy and the digital divide among older Americans. J Gen Intern Med. 2015;30:284-9. 
  17. Magnani JW, Mujahid MS, Aronow HD, Cené CW, Dickson VV, Havranek E, Morgenstern LB, Paasche-Orlow MK, Pollak A, Willey JZ; American Heart Association Council on Epidemiology and Prevention; Council on Cardiovascular Disease in the Young; Council on Cardiovascular and Stroke Nursing; Council on Peripheral Vascular Disease; Council on Quality of Care and Outcomes Research; and Stroke Council. Health literacy and cardiovascular disease: fundamental relevance to primary and secondary prevention: a scientific statement from the American Heart Association. Circulation. 2018;138:e48-e74. 
  18. Meppelink CS, Smit EG, Diviani N, Van Weert JC. Health literacy and online health information processing: unraveling the underlying mechanisms. J Health Commun. 2016;21(suppl 2):109-20. 
  19. Nelson LA, Mulvaney SA, Gebretsadik T, Ho YX, Johnson KB, Osborn CY. Disparities in the use of a mHealth medication adherence promotion intervention for low-income adults with type 2 diabetes. J Am Med Inform Assoc. 2016;23:12-8.
  20. Rigby MJ. Ethical dimensions of using Artificial Intelligence in health care. AMA J Ethics. 2019;21:E121-4. 
  21. Shaping Europe’s digital future. Report/Study: Ethics guidelines for trustworthy AI. European Commission. 8 April 2019. 
  22. European Union. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC. 
  23. White Paper on Artificial Intelligence - A European approach to excellence and trust. European Commission. Brussels, 19.2.2020 COM(2020) 65 final. 
  24. Spiegelhalter D. Should We Trust Algorithms? Harvard Data Science Review. 2020 Jan 31;2(1). 
  25. Terrasse M, Gorin M, Sisti D. Social Media, E-Health, and Medical Ethics. Hastings Cent Rep. 2019;49:24-33. 
  26. Boers SN, Jongsma KR, Lucivero F, Aardoom J, Büchner FL, de Vries M, Honkoop P, Houwink EJF, Kasteleyn MJ, Meijer E, Pinnock H, Teichert M, van der Boog P, van Luenen S, van der Kleij RMJJ, Chavannes NH. SERIES: eHealth in primary care. Part 2: Exploring the ethical implications of its application in primary care practice. Eur J Gen Pract. 2020;26:26-32. 
  27. Burr C, Taddeo M, Floridi L. The Ethics of Digital Well-Being: A Thematic Review. Sci Eng Ethics. 2020 Jan 13. 
  28. Fenech ME, Buston O. AI in cardiac imaging: a UK-based perspective on addressing the ethical, social, and political challenges. Front Cardiovasc Med. 2020;7:54. 

Notes to editor


Author:

Enrico G. Caiani, MS, PhD, FESC

Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Milan, Italy
Institute of Electronics, and Information and Telecommunication Engineering, Consiglio Nazionale delle Ricerche, Milan, Italy

 

Address for correspondence:

Prof. Enrico G. Caiani, Piazza L. da Vinci 32, 20133 Milano (MI), Italy
E-mail: enrico.caiani@polimi.it

 

Author disclosures:

Professor Caiani has received honoraria as speaker/consultant from Medtronic International Trading Sarl, Merck & Co., Inc, and Novartis.

 

The content of this article reflects the personal opinion of the author/s and is not necessarily the official position of the European Society of Cardiology.