
Will AI Be Able to Exploit the 'Blind Side' of Human Skill in Cardiovascular Imaging: from Being Artisans to Becoming Pilots?

Coronary Artery Disease (Chronic)
Cardiac Computed Tomography
Cardiac Magnetic Resonance

Keywords: Cardiac; Vascular; Cardiovascular; Imaging; Cardiac Computed Tomography; Coronary Computed Tomography; Coronary Artery Disease; Cardiac Magnetic Resonance; CMR; Atherosclerosis; CAD; CCT; CTA; diagnosis; prediction; prognosis

 

Medicine has always been - and still is - a science in which strong background preparation, experience, a sixth sense, empathy, constant updates in knowledge, and luck mix together.

Guidelines derived from large randomised controlled trials (RCTs) and meta-analyses are considered the backbone of modern medicine, yet practising, experienced MDs know that those guidelines are only valid until they meet individual patients, who are often different from the ones described in the guidelines. Mathematics, physics, chemistry, biology, engineering, and so on are based on stronger rules and laws than medicine, which is a combination of several different sciences. Medicine is an art, very fluid and difficult to understand. Suffice it to say that we still do not fully understand the language of medicine (i.e. RNA, DNA) and the proper interactions and constructions involving the human body, let alone brain function.

The hype around artificial intelligence (AI) is enormous and growing, especially over the last two to three years. AI was already a distinct field about 20 years ago, but now we are on the eve of clinical implementation. Every day we read in major newspapers how new AI algorithms perform better than "humans" at certain tasks, with particular prominence given to medicine. In addition, the media strategy for gaining visibility is based on baiting: to be seen, you have to shoot bigger and bigger.

Without doubt, there is a financial interest in planting in people's minds the idea that "machines" will replace doctors very soon (which, to the average non-medical person, means tomorrow). Healthcare is very expensive; with costs and demand increasing, machines will eventually be cheaper. To exploit this concept, they push the envelope, claiming that AI is never wrong and/or is much better than humans. But this is wrong both in methodology and in aim. Actually, the first step should be to demonstrate that AI delivers non-inferior clinical results (besides other collateral advantages). Outcomes should be measured; AI should undergo conventional scientific validation with RCTs.

The more prosaic reality is that in today's medicine AI and its branches - e.g. machine learning (ML) and deep learning (DL) - are starting to be challenged with simple tasks in super-selected environments, and they show good results. Small validation studies in hyper-selected cohorts have been published in major journals, pushing the idea that AI will be a substitute for human-driven medicine. Probably the Dunning-Kruger effect is playing a role here: the less you know, the more you think you know. The question then becomes: is there already a role for AI in cardiac imaging?

There has recently been a comprehensive review of ML in cardiac imaging in the European Heart Journal (1). The main aspect highlighted is that AI can enable and/or improve prediction. As a first observation, I wonder whether we need better prediction in the first place: we definitely need better diagnosis. Eventually, better prediction might be useful.

As a premise, we have to recognise that for decades the hardware and software companies involved in cardiac imaging delivered less than optimal software for calculation and automation, and I am being polite. Advanced cardiac imaging started to become clinically viable about 25 years ago: first with single-photon emission computed tomography (SPECT) and cardiovascular magnetic resonance (CMR), and soon after with cardiac computed tomography (CCT). We had plenty of expensive software and workstations in most imaging departments, used predominantly to visualise images rather than to measure or extract quantitative data (with many exceptions). Even today we have many more software applications than we can use in clinical practice.

Why was it like that in the past? Because these tools were slow, unreliable, operator-dependent, not reproducible (inter-/intra-operator and inter-/intra-imaging method), and not adequately validated in large populations; moreover, they did not talk to each other (i.e. a lack of interoperability).

What is different today? Basically nothing, except that computing power is much greater, the availability of digital data is much larger, and access to powerful technologies is much wider. Knowledge of algorithms and of medicine has also progressed.

On a personal note, I would like to stress that software applications were often (and still are) designed predominantly by computer scientists and engineers without integrating the design process into clinical reality and workflow. They were not suitable for clinicians, meaning you had to hire and train personnel to do the work, at a high cost. This might work adequately for academic purposes and big trials, but it did not fit into clinical realities, where cost, time, robustness, reliability, and numbers are very important. The scenario is improving, but there are no quantum leaps on the horizon.

Where are the imminent breakthroughs for clinical implementation of AI in cardiac imaging?

Mostly in areas where tasks can be accelerated and made independent of human interaction, at least at the level of patient management and pre-processing. A classic example is the calculation of left ventricular (LV) morphological and functional parameters in CMR. LV quantification has always been part of every CMR exam, but it was very time-consuming and required massive supervision/correction (in a non-negligible percentage of cases it was an exhausting manual segmentation task). Today, software has completely automated this task, and I have to say that, thanks to AI, the amount of correction needed is negligible, even for the right ventricle (which is much more difficult than the LV). After 30 years of CMR, automating the measurement of LV morphological and functional parameters has resulted in much faster reporting and more time to think about image interpretation. Nice! But no actual advantage for the patient.
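To make concrete what automated LV quantification actually computes once the segmentation step is done, here is a minimal Python sketch of the standard slice-summation (Simpson's) approach: segmented blood-pool voxels at end-diastole and end-systole are summed to obtain volumes, stroke volume, and ejection fraction. The function names, array shapes, and the random stand-in masks are purely illustrative assumptions, not any vendor's implementation.

```python
import numpy as np

def lv_volume_ml(masks, pixel_spacing_mm, slice_thickness_mm):
    """Slice-summation (Simpson's) volume from a stack of binary LV masks.

    masks: array of shape (n_slices, rows, cols), 1 = LV blood pool.
    pixel_spacing_mm: (row_spacing, col_spacing) in millimetres.
    slice_thickness_mm: slice thickness (plus gap) in millimetres.
    Returns the volume in millilitres (1 ml = 1000 mm^3).
    """
    pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    voxel_volume_mm3 = pixel_area_mm2 * slice_thickness_mm
    return masks.sum() * voxel_volume_mm3 / 1000.0

def lv_function(ed_masks, es_masks, pixel_spacing_mm, slice_thickness_mm):
    """Derive EDV, ESV, stroke volume, and ejection fraction."""
    edv = lv_volume_ml(ed_masks, pixel_spacing_mm, slice_thickness_mm)
    esv = lv_volume_ml(es_masks, pixel_spacing_mm, slice_thickness_mm)
    sv = edv - esv
    ef = 100.0 * sv / edv
    return {"EDV_ml": edv, "ESV_ml": esv, "SV_ml": sv, "EF_percent": ef}

# Hypothetical example: 10 short-axis slices, 1.4 x 1.4 mm pixels, 8 mm slices.
# Random arrays stand in for segmentation masks produced by an AI model.
rng = np.random.default_rng(0)
ed = (rng.random((10, 192, 192)) > 0.97).astype(np.uint8)
es = (rng.random((10, 192, 192)) > 0.985).astype(np.uint8)
print(lv_function(ed, es, (1.4, 1.4), 8.0))
```

The hard part that AI actually solves is producing the masks; once they exist, the downstream arithmetic above is trivial, which is why reporting gets faster without the diagnosis itself changing.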

Automation of the workflow - prior to and during the actual scan - is another field of AI expansion and derived applications that is overlooked and not adequately advertised. The cost of personnel (nurses, technicians, assistants) at that stage is quite high, because so much information has to be gathered before performing high-quality examinations and adapting scan strategies/parameters to individual patients. Most of this information is already available somewhere (e.g. patients' files, institutional databases). Instructing the patient correctly is as important as collecting proper clinical files for accurate reporting. All these tasks, including the choice of scan parameters, have already started to be assisted and will soon be automated. In fact, it is easier to automate the scanning process (with only external human supervision) than to automate a clinical report. We already accept similar kinds of automation at grocery stores, in automobiles, and with online banking, so it should not be too complicated.
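As a purely illustrative sketch of what "assisted choice of scan parameters" could look like in its simplest, rule-based form, the toy Python function below maps a few fields that might already sit in patients' files onto acquisition choices. The field names, thresholds, and protocol options are hypothetical placeholders invented for this example; they are not clinical recommendations and not any existing product.

```python
from dataclasses import dataclass

@dataclass
class PatientInfo:
    # Illustrative fields that might be pulled from institutional records.
    heart_rate_bpm: int
    can_hold_breath: bool
    has_renal_impairment: bool
    body_mass_index: float

def suggest_cmr_protocol(p: PatientInfo) -> dict:
    """Toy pre-scan assistant: maps information already available in the
    patient's files onto acquisition choices. All rules are placeholders."""
    return {
        "cine_acquisition": "breath-hold" if p.can_hold_breath else "free-breathing",
        "contrast_strategy": "avoid/adjust" if p.has_renal_impairment else "standard",
        "field_of_view": "large" if p.body_mass_index >= 30 else "standard",
        "arrhythmia_rejection": p.heart_rate_bpm > 100,
    }

# Hypothetical patient record retrieved from an institutional database.
print(suggest_cmr_protocol(PatientInfo(88, True, False, 27.5)))
```

Real pre-scan automation would of course learn such mappings from data rather than hard-code them, but the point stands: the inputs already exist, so the step is easier to automate than the clinical report itself.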

Training expert operators in advanced cardiac imaging is a difficult and intensive task. Even if at some point multiple steps of the process could be automated, expert operators would still be needed to supervise constantly, conduct periodic quality checks, and intervene/override in cases of discrepancy, malfunction, or simple inaccuracy. This is not a trivial issue, because there is going to be a non-negligible percentage of cases in which automation will not work properly: if fewer operators are trained because of automation, what will become of intensive training? There is already a shortage of expert cardiac imagers; to better understand the problem, imagine the same issue in the field of surgery.

This last issue opens the gate to the even bigger issue of liability. When mistakes are made by algorithms rather than by humans, what happens? When the interpretation of findings is inaccurate or wrong and wrong decisions follow, what happens? Who is going to be held responsible? If the AI runs in the cloud and is physically located in another country, how will local liability regulations apply?

The specific local laws in place today will need to be modified before automated tools driven by AI can be used. The matter is complex and will need open discussion within scientific societies and regulatory bodies. Without going too deeply into the matter, there is also a massive issue concerning regulations on informed consent and, even more, on privacy.

Notes to editor


Declaration of Interests: The author(s) have declared no conflicts of interest.

The content of this article reflects the personal opinion of the author/s and is not necessarily the official position of the European Society of Cardiology.