Prof. Enrico Caiani
New technologies promise to improve healthcare by empowering the individual and offering new approaches to cope with the limitations of current practice. However, this generates specific and novel ethical issues. The ethical implications relevant to data quality, the patient-physician relationship, and the equity of access to healthcare services will be discussed, as well as potential questions arising from the utilisation of artificial intelligence (AI) in healthcare settings. The importance of pre-evaluating ethical implications before the implementation of new digital solutions into clinical practice is highlighted, as well as the need to update the educational pathways for both citizens and healthcare professionals.
e-Health has been defined as the use of information and communications technology (ICT) in support of health and health-related fields. Recently, the term digital health has been introduced as a more general definition encompassing e-Health as well as emerging areas such as the use of advanced computing sciences in “big data”, genomics and artificial intelligence (AI), to support and deliver health care and to improve the health and wellbeing of people.
These technologies are considered promising, as they would allow more personalised approaches to care, patient empowerment, improved patient safety, better communication between care providers and patients, increased access to health information, better chronic disease management and prevention, improved efficiency of the healthcare system, improved access to scarce specialist skills with reduced patient referral, tele-education in the case of shortages of expertise, connection with patients even in remote locations, and better management of the large amount of health data available nowadays.
While the potential of digital health solutions is wide-ranging, their design, development, deployment, and utilisation rely on business models and use of algorithms and data that need to be considered from an ethical point of view.
Generally, in medicine and nursing the basic principles of ethics concern respect for human life, respect for human dignity, autonomy, care, justice, and maximising beneficence. Digital health solutions pursue different aims, thus extending the existing ethical implications beyond what has been accepted until now.
In addition, the availability of new technology leads to the consideration of issues other than the conventional ones (i.e., privacy, confidentiality, and informed consent) that are specific and novel. In Table 1, a selection of publications from the last three years that focus on the ethics of digital health tools is reported chronologically. From this it is possible to appreciate how the principles that have been addressed, and considered a priority, have evolved over time.
Table 1. Evolution over time in selected literature of the main ethical aspects concerning digital health.
Kleinpeter E. 2017: Data protection, equality of service availability, medicine from the art of curing to the science of measurement, changes in the patient-physician relationship.
Terrasse M et al. 2019: Impact of social networking sites on the doctor-patient relationship, the development of e-health platforms to deliver care, the use of online data and algorithms to inform health research, and the broader public health consequences of widespread social media use.
Dealing with predictive and diagnostic uncertainty, roles and responsibilities of patients, roles and responsibilities of physicians, patient-physician relationship.
Burr C et al. 2020: Privacy, autonomy, accountability, intelligibility, accessibility.
Fenech and Buston. 2020: Relationship, data, transparency and explainability, health inequalities, errors and liability, meeting public needs, regulation, consequences of novel insights, trusting algorithms, collaborations between public and private sector organisations, developing principles and translating them into policy.
Among these, three main ethical implications of digital health will be discussed in this paper, as well as the potential questions arising from the utilisation of AI in the healthcare setting.
Novel mobile medical technologies for ubiquitous monitoring, directly controlled and operated by the patient and often providing an onboard interpretation of the measured signals, are now widely available. To overcome the difficulties in integrating these systems into the clinical workflow, it becomes of paramount importance: 1) to know the exact claims, limits of validity, and reported accuracy for which the medical device or software has been approved; 2) to be sure that the device has been used as recommended, so that the patient-collected data can be considered reliable; and 3) to know about possible software updates that could have changed the performance of the device.
The first point recalls, on the one hand, the need for more transparency in the medical device certification process and, on the other hand, the need for clinicians to be informed about the performance and limits of usage of the devices they could recommend to patients. As an example, the Apple smartwatch software that uses an optical sensor to detect the pulse wave and look for beat-to-beat variability at rest to detect possible atrial fibrillation (AF) is not designed for individuals already diagnosed with AF. Also, the ECG app in the same device, which uses the electrical heart sensor to record a 1-lead ECG and then provides a result (i.e., sinus rhythm, AF, or inconclusive), is not recommended for users with other known arrhythmias, is not designed for individuals diagnosed with AF, and is not intended to detect AF at heart rates above 120 bpm.
The second point implies that the patient needs sufficient digital skills and health literacy (i.e., the ability to seek, find, understand, and appraise health information from electronic sources and apply the knowledge gained to addressing or solving a health problem) in order to be able to use the device properly.
The third point refers to the fact that digital health solutions based on smartphone applications and embedded or connected sensors are routinely modified via internet connection, to patch possible bugs, to adapt to new operating systems, and to expand the features provided. In this process, where automated updating depends on the smartphone user’s settings, algorithm performance can be profoundly changed, thus requiring verification of which software version was used to automatically interpret the data [10,11].
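One pragmatic way to make such verification possible is to record provenance metadata with every patient-collected reading. The following minimal sketch (hypothetical field names, not any vendor's actual API) tags each measurement with the app and algorithm versions that produced it, so that interpretations generated by a superseded algorithm release can later be flagged for review:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Reading:
    """A patient-collected measurement tagged with provenance metadata."""
    value: float            # e.g., heart rate in bpm
    interpretation: str     # e.g., "sinus rhythm", "possible AF"
    app_version: str        # version of the mobile app that recorded it
    algorithm_version: str  # version of the classifier that interpreted it

def needs_reinterpretation(reading: Reading, current_algo: str) -> bool:
    """Flag readings interpreted by an older algorithm release, whose
    performance may differ from that of the current release."""
    return reading.algorithm_version != current_algo

# Two illustrative readings recorded before and after an algorithm update
history = [
    Reading(72.0, "sinus rhythm", "2.3.1", "1.0.0"),
    Reading(128.0, "possible AF", "2.4.0", "1.1.0"),
]

# Readings whose stored interpretation predates the current algorithm
stale = [r for r in history if needs_reinterpretation(r, "1.1.0")]
print(len(stale))  # number of readings produced by a superseded version
```

Storing the version alongside the value, rather than only in the app settings, means that a clinician (or an audit process) can reconstruct which software release interpreted each data point even after several automatic updates.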
While in the past the healthcare professional was the natural intermediary in guiding patient access to health information, nowadays this role is functionally by-passed by new apomediaries (i.e., the Web, on-line groups, …) in a process of progressive disintermediation, with the ethical concern that incorrect ideas or potentially dangerous practices could take hold. In addition, the development of digital health poses the risk of shifting medicine from the “art of curing” to a “science of measurement”, where the inner life and feelings of the patient would be forgotten, and the communicative dimension of the medical act would be secondary to its informational dimension.
Nevertheless, to guide the patient on his/her digital health journey, new professional learning paths are needed for physicians, covering how to use the new tools properly, how to interpret their results correctly, and how to integrate them into medical practice ethically and deontologically.
The wide patient access to medical devices (beyond the conventional thermometer and blood pressure monitor usually found in everybody’s home), as well as to lifestyle wearables, will lead to a situation in which patients self-monitor their health, confirm or rule out symptoms, and either initiate a physician consultation or take immediate action.
This new scenario of shifting tasks and responsibilities could affect the roles that doctors have in healthcare, on the one hand stimulating new interactions between care providers and patients to better understand health and its correlation with lifestyle, and on the other hand creating in the patient a sense of independence, self-judgement and confrontation. This raises questions as to whether valuable features of the traditional patient-physician relationship would be lost, whether medical training would need to be rethought, or whether patient expectations should be redirected.
Health equity has been defined as the absence of discrimination or unfair health disparities, to be achieved by minimising such disparities among groups of people with different levels of underlying social advantage. In this context, digital health is considered as having the potential to resolve health inequalities. However, lack of computer technology, limited access to the Internet, lack of the required skills, and physical access barriers (which mainly affect low-income classes, the elderly, and people with disabilities) could limit the accessibility of these new services for the very categories expected to receive the most benefit from them, thus exacerbating disparities in healthcare quality and outcomes and reinforcing what has been described as the “digital divide” [3,16].
In particular, the importance of health literacy (i.e., the degree to which an individual can access, process, and comprehend basic health information and services in order to inform and participate in health decisions) to cardiovascular disease management, prevention and public health has been addressed recently and underlined in a scientific statement from the American Heart Association.
Also, it has been reported that individuals with limited health literacy may have less access to reliable Internet-based health education materials. While the use of digital health solutions may represent an attractive option for patient-oriented education, text messages, and social networking to help with chronic disease management, a mobile health intervention study showed that racial or ethnic minorities, older adults, and those with limited health literacy were the least engaged with text messaging and automated calls.
AI represents a multidisciplinary area, including the fields of machine learning, natural language processing, and robotics, describing a range of techniques that allow computers to perform tasks that would usually require human reasoning and problem-solving skills. AI research started more than 60 years ago in the field of computer science, but now, thanks to technological developments in both artificial neural networks and powerful graphics processing units (GPUs), it seems to have reached an adequate level of maturity to be applied in any field of medicine. Thanks to its ability to extract knowledge and learn from large sets of clinical data, AI can play a role in diagnostics, decision making and personalised medicine. At the same time, it creates a novel set of ethical challenges that need to be identified and mitigated.
On 10 April 2018, twenty-five European countries signed a Declaration of Cooperation on Artificial Intelligence (AI), with the goals of increasing public and private investment in AI, preparing for socio-economic changes, and ensuring an appropriate ethical and legal framework. Concerning this last aim, in June 2018 the Commission appointed 52 experts to a High-Level Expert Group on Artificial Intelligence (AI HLEG), including representatives from academia, civil society, as well as industry (but no medical associations, physicians, or patients were involved).
After publication of a first draft in December 2018 and the review of 506 comments received, in April 2019 the final voluntary and non-binding document on Ethics Guidelines for Trustworthy AI was published. In it, seven key requirements (i.e., human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) and four interrelated ethical principles (respect for human autonomy; prevention of harm; fairness; and explicability) were generally addressed. In Table 2, examples of questions arising, contextualising such principles into the healthcare domain, are given.
Table 2. Examples of questions arising when contextualising the ethical principles addressed in the Ethics Guidelines for Trustworthy AI into the healthcare domain. (Table content not reproduced; the rows cover principles such as respect for human autonomy and prevention of harm, with the corresponding contextualised questions in the healthcare domain.)
At the basis of the development of AI in healthcare lie questions about its regulatory environment. Currently, software intended for medical purposes, such as prediction and prognosis, has been added to the medical device definition of the Medical Device Regulation, thus in principle covering AI under this regulatory umbrella. However, no specific characteristics of AI have been addressed in such regulations or in related guidance documents.
More recently, the European Commission published a “White Paper On Artificial Intelligence - A European approach to excellence and trust”, setting out policy options on how to promote the uptake of AI and how to address the risks associated with certain uses of this new technology, strongly supporting a human-centric approach.
Considering the lack of current requirements regarding transparency, traceability and human oversight, the creation of a clear European regulatory framework for trustworthy AI, to be applied on related products and services, is keenly anticipated. Results of these regulatory efforts will be visible in the near future and will allow a better evaluation of how to integrate AI properly as a digital health tool in healthcare practice, hopefully overcoming the limitations of current implementations .
Technological progress is part of our society, and it is only natural that it impacts the healthcare domain. To cope properly with new digital health tools, it is important to pre-evaluate their ethical implications before their implementation into clinical practice. In this way, the regulatory framework, as well as the educational pathways for both citizens and healthcare professionals, can be updated in order to guide the process, rather than first being affected by it and only afterwards searching for a remedy.
Enrico G. Caiani, MS, PhD, FESC
Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Milan, Italy
Institute of Electronics, and Information and Telecommunication Engineering, Consiglio Nazionale delle Ricerche, Milan, Italy
Address for correspondence:
Prof. Enrico G. Caiani, Piazza L. da Vinci 32, 20133 Milano (MI), Italy
E-mail: firstname.lastname@example.org
Professor Caiani has received honoraria as speaker/consultant from Medtronic International Trading Sarl, Merck & Co., Inc, and Novartis.
© 2020 European Society of Cardiology. All rights reserved.