
Hot Line - AI assessment of LVEF is superior to sonographer assessment: Results from the EchoNet-RCT trial

Human interpretation of transthoracic echocardiograms for the assessment of left ventricular ejection fraction (LVEF) is subject to high interobserver variability. Artificial intelligence (AI) has been proposed as a way of improving accuracy, but it had not previously been evaluated prospectively.

In a Hot Line session yesterday, Professor David Ouyang (Cedars-Sinai Smidt Heart Institute - Los Angeles, USA) presented data from the EchoNet-RCT trial, which compared an AI model head-to-head with sonographer evaluation for LVEF assessment. The AI model, EchoNet-Dynamic, is a deep-learning algorithm trained on echocardiogram videos across multiple cardiac cycles that has previously been shown to assess LVEF with a mean absolute error of 4.1–6.0%.1

In the prospective, double-blind EchoNet-RCT trial, transthoracic echocardiograms performed on adults for any clinical indication were randomly allocated 1:1 to initial LVEF assessment by the AI model or by a sonographer. Cardiologists, blinded to the source of the initial assessment, subsequently reviewed it and provided a final report of LVEF. The primary endpoint was the frequency of a change of more than 5% in LVEF between the initial assessment and the final cardiologist report. The trial was designed to test for noninferiority, with a secondary objective of testing for superiority.
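The primary endpoint described above amounts to a simple threshold rule. The sketch below assumes the >5% criterion refers to absolute LVEF percentage points, an interpretation not spelled out in this article; the function name and example values are illustrative, not from the trial protocol.

```python
def substantially_changed(initial_lvef: float, final_lvef: float,
                          threshold: float = 5.0) -> bool:
    """Return True if the cardiologist's final LVEF differs from the
    initial (AI or sonographer) assessment by more than the threshold.

    Assumes the trial's >5% criterion means absolute percentage points
    (an assumption; the article does not specify)."""
    return abs(final_lvef - initial_lvef) > threshold

# Illustrative values only:
print(substantially_changed(35.0, 41.0))  # a 6-point revision counts
print(substantially_changed(50.0, 54.0))  # a 4-point revision does not
```

Each echocardiogram contributing such a True/False outcome is what allows the endpoint to be summarised as a frequency per arm.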

Among the 3,495 transthoracic echocardiograms included, the proportion of initial assessments substantially changed by the cardiologist was lower in the AI group (16.8%) than in the sonographer group (27.2%) (difference −10.4%; 95% CI −13.2 to −7.7; p<0.001 for noninferiority; p<0.001 for superiority).
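The reported confidence interval can be approximately reproduced with a standard Wald interval for a difference in two proportions. The per-arm sample sizes below assume an even 1:1 split of the 3,495 studies, which is an assumption; the article reports only the total.

```python
import math

# Reported proportions of initial assessments substantially changed
p_ai, p_sono = 0.168, 0.272
# Assumed per-arm counts: even split of 3,495 studies (not stated in article)
n_ai, n_sono = 1748, 1747

diff = p_ai - p_sono  # -0.104, i.e. the reported -10.4 percentage points
se = math.sqrt(p_ai * (1 - p_ai) / n_ai + p_sono * (1 - p_sono) / n_sono)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"difference = {diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Under the assumed even split, this yields an interval close to the reported 95% CI of −13.2 to −7.7, suggesting the published analysis is consistent with a straightforward two-proportion comparison.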

The safety endpoint was the absolute difference between the final cardiologist report and a historical cardiologist report of the same study. The mean absolute difference was 6.29% in the AI group versus 7.23% in the sonographer group (difference −0.96%; 95% CI −1.34 to −0.54; p<0.001 for superiority).

Dr. Ouyang comments, “We learned a lot from running a randomised trial of an AI algorithm, which hasn’t been done before in cardiology.” On the implications of the findings, he adds, “What this means for the future is that certain AI algorithms, if developed and integrated in the right way, could be very effective at not only improving the quality of echo reading output but also increasing efficiencies in time and effort spent by sonographers and cardiologists by simplifying otherwise tedious but important tasks. Embedding AI into clinical workflows could potentially provide more precise and consistent evaluations, thereby enabling earlier detection of clinical deterioration or response to treatment.”


1. Ouyang D, et al. Nature. 2020;580:252–256. 

The content of this article reflects the personal opinion of the author/s and is not necessarily the official position of the European Society of Cardiology.