Data-driven personalised medicine

ESC Cardiovascular Round Table (CRT) Plenary Meeting

4 October 2019, Tallinn, Estonia 

Moderators: Ulf Landmesser (Charité Universitätsmedizin Berlin), Isabelle Lordereau-Richard (Amgen).

Speakers: Folkert Asselbergs (UMC Utrecht), Mikhail Zaslavskiy (Owkin), Pedro F. Monteiro (University Hospitals of Coimbra), Ulf Landmesser (Charité Universitätsmedizin Berlin).

Personalised medicine for cardiovascular disease is at an early stage. The private sector is ahead in targeting consumers, but platforms such as Facebook and LinkedIn are standalone systems that do not face interoperability problems. Interoperability is cardiology’s main issue – a jungle of platforms, phenotypes, and coding systems. The challenge is how to unlock these data for use with artificial intelligence (AI).

Precision medicine is more advanced in oncology than in cardiology – by characterising tissue biopsies and analysing gene expression, targeted therapy can be administered based on an individual profile. In contrast, heart failure is a heterogeneous diagnosis treated, in many cases, with a one-size-fits-all approach.

Taxonomy in cardiovascular disease management needs redefining to enable targeted treatment that considers underlying aetiology and specific disease manifestation. In future, these phenotypes will be key for designing trials and developing treatment guidelines. The ESC could take the lead by setting up a portal of phenotypes. If done in collaboration with the US Food and Drug Administration (FDA), this would allow international comparisons of real-world data. There are already steps in this direction – the Innovative Medicines Initiative (IMI) BigData@Heart consortium added a big data definition of atrial fibrillation to the International Consortium for Health Outcomes Measurement (ICHOM) standard set of outcomes.

Precision medicine in cardiology today encompasses risk calculators, but because statistical probabilities are derived from populations, it remains difficult to predict an individual’s risk and how that risk translates into cardiovascular events. One aim is to create a digital twin, whereby an individual patient is plotted into datasets like the UK Biobank; AI or regression analysis would then predict the most probable disease course and stratify patients for the most appropriate treatment. But personalised medicine is not just about clinical variables; it also depends on the exposome, which covers chemical, environmental, and lifestyle exposures.
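
The digital-twin idea can be illustrated with a toy nearest-neighbour sketch: a new patient is placed in a reference cohort, and the observed outcomes of the most similar individuals serve as the individualised prognosis. The cohort, risk factors, and event labels below are entirely synthetic stand-ins for a resource such as the UK Biobank, not an actual implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 5000

# Hypothetical reference cohort: age, systolic BP, LDL cholesterol, smoker
cohort = np.column_stack([
    rng.normal(60, 10, n),                # age (years)
    rng.normal(135, 15, n),               # systolic blood pressure (mmHg)
    rng.normal(3.5, 1.0, n),              # LDL cholesterol (mmol/L)
    (rng.random(n) < 0.3).astype(float),  # current smoker (0/1)
])

# Synthetic 5-year event labels, loosely driven by the risk factors
score = 0.03 * cohort[:, 0] + 0.02 * cohort[:, 1] + 0.4 * cohort[:, 2] + 1.0 * cohort[:, 3]
events = score + rng.normal(0, 0.5, n) > np.quantile(score, 0.8)

# Find a new patient's 100 nearest "twins" in standardised feature space
scaler = StandardScaler().fit(cohort)
twins = NearestNeighbors(n_neighbors=100).fit(scaler.transform(cohort))

patient = np.array([[67.0, 150.0, 4.2, 1.0]])
_, idx = twins.kneighbors(scaler.transform(patient))

# The twins' observed event rate serves as the individual risk estimate
print(f"Estimated 5-year event risk: {events[idx[0]].mean():.0%}")
```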

University Hospital Coimbra is using the DEEP platform, an AI-based clinical risk prediction model that uses routine electronic health record data. It aims to identify an individual’s risk of an event and the risk drivers that can be managed, leading to personalised treatment plans. Prediction capability increases incrementally with the number of data points used in the algorithm. Acute coronary syndrome (ACS) patients are stratified into low, medium, and high risk more accurately than with conventional clinical risk scores. Within a risk category, the predominant risk driver (hypertension, lipids, etc.) helps to tailor treatment.
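
As a rough illustration of this pattern – a generic sketch, not the DEEP platform itself – a linear model can return both a risk tier and the predominant risk driver, since each feature’s contribution to the risk score is simply its coefficient times its standardised value. All data, thresholds, and feature names below are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["hypertension burden", "LDL cholesterol", "HbA1c", "smoking"]

# Synthetic ACS cohort: risk-factor values and observed events
X = rng.normal(size=(2000, 4))
y = X @ np.array([0.8, 1.2, 0.5, 0.9]) + rng.normal(0, 1, 2000) > 1.0

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def assess(patient):
    """Return predicted risk, a risk tier, and the top risk driver."""
    z = scaler.transform(patient.reshape(1, -1))
    p = model.predict_proba(z)[0, 1]
    tier = "high" if p > 0.5 else "medium" if p > 0.2 else "low"
    contributions = model.coef_[0] * z[0]  # per-feature share of the score
    driver = features[int(np.argmax(contributions))]
    return p, tier, driver

p, tier, driver = assess(np.array([0.2, 2.1, 0.3, 0.1]))
print(f"Predicted risk {p:.0%} ({tier}); predominant driver: {driver}")
```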

Owkin, a company that creates AI technologies, uses hospital data to train and validate predictive models. The aim is to discover biomarkers that predict survival or response to treatment. The information can be used to optimise clinical trials – for example, by removing patient heterogeneity to better detect a treatment effect, finding subgroups of good responders, and leveraging real-world data to replace control arms. Privacy is maintained with a federated learning approach, whereby data is used locally and only the model is transferred. When dealing with data from multiple centres and countries, interoperability and harmonisation are the biggest challenges – processing and alignment of datasets can take up to 90% of project time.
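
A minimal sketch of this federated pattern follows, using a hand-rolled logistic regression so the mechanics are visible: each site runs a few gradient steps on its own data, and only the weight vectors are sent to a coordinator for averaging (federated averaging). The hospitals, data, and hyperparameters are illustrative assumptions, not Owkin’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(w, X, y, lr=0.1, steps=5):
    """A few gradient-descent steps on one site's private data."""
    for _ in range(steps):
        w = w - lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# Three "hospitals", each holding a private dataset that never leaves the site
true_w = np.array([1.5, -2.0, 0.5])
sites = []
for _ in range(3):
    X = rng.normal(size=(500, 3))
    y = (rng.random(500) < sigmoid(X @ true_w)).astype(float)
    sites.append((X, y))

# Federated rounds: only weight vectors travel to the coordinator, which
# averages them and broadcasts the result back to the sites
w_global = np.zeros(3)
for _ in range(20):
    local_weights = [local_update(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local_weights, axis=0)

print("Global model weights after federated training:", np.round(w_global, 2))
```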

Proof is needed that AI-based personalised medicine improves outcomes before it can be integrated into clinical practice guidelines. As for recommending wearables and sensors, all devices need to be tested for specificity and sensitivity, and caution is needed to ensure patients do not skip important check-ups because a wearable did not sound an alarm.

Discussion points

  • AI may lead to better phenotyping. But is there any example from cardiovascular medicine where improved phenotyping, and personalised medicine based on it, have led to better outcomes? No clinical trial has tested this. The point is that you cannot do AI without the phenotypes. Unsupervised clustering does not work well – different groups use different clusters, so scalability and generalisability across cohorts are poor.
  • Oncology is much more advanced from a precision medicine perspective – a trial is underway to show that precision medicine is better than standard of care. Cardiovascular medicine is still at the stage of developing the AI-based algorithms, which can then be the basis for clinical outcome studies.
  • Does better risk assessment change management and save costs for healthcare providers including hospitals?
    • Coimbra used the new risk stratification system in prospective ACS patients to direct treatment objectives towards risk drivers. A comparison of six months of data after AI implementation with six months of historical data showed a 10% decrease in readmissions.
    • Preliminary work in a clinical trial database showed the algorithm was three times better than the classical approach at stratifying patients into high versus low risk for a given therapy.
  • Regarding the use of risk drivers to choose therapy, we have to be careful about changing treatment algorithms. For example, lipid lowering is effective in patients with diabetes and should not be withheld. All patients receive guideline-recommended treatment. But in patients for whom optimised use of guideline therapies is insufficient to become low risk, Coimbra uses the algorithm to identify the main risk drivers and choose second-line therapies.
  • Patient and physician behaviour affect outcomes – can AI improve this? Yes – AI-based algorithms can help us predict and improve patient and doctor compliance.
  • Expectations for AI in healthcare should not be set too high. Supervised learning, such as image processing and ECG interpretation, may replicate human activity and save physician time. But the search for new relationships might pay off in only one case in 100, because each candidate finding needs testing in a randomised controlled trial (RCT), which is expensive and time consuming.
  • The added value of machine learning comes when there are multiple data sources to link. With three to four variables, regression analysis is more powerful and easier to interpret – so it is important not to overhype these new methods. Another issue is how to interpret black-box machine learning.
  • RCT subgroup analyses identify populations that may particularly benefit from treatment. In future will we use AI to identify these patients? Do we have to prove this in prospective RCTs? One of the first steps is to use AI algorithms based on real-world data to choose inclusion criteria for trials and identify patients.
  • Regarding data security, the federated approach to AI is the way forward, but even with coding the data might be identifiable. Data is kept locally to avoid compromising privacy; only the weights and basic information about the model are transferred. For example, from data on 1,000 patients with 10 features, we share 10 values characterising the impact of each feature on the outcome of interest. Sharing global characteristics once is not dangerous, but doing so iteratively can lead to leakage of the underlying data (see the toy sketch after this list).
  • After identifying new prediction factors with machine learning, validation is an essential step. Does the factor make sense? Or is it an artefact of bias in the data used to build the prediction model?
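
The leakage point above can be made concrete with a toy example (synthetic numbers, not a real attack on any system): for logistic regression, the gradient computed on a single patient is (sigmoid(w·x) − y)·x, a scalar multiple of the patient’s own feature vector, so anyone who observes a one-patient update can reconstruct the features up to scale. Iterated updates narrow the remaining uncertainty further.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(7)
w = rng.normal(size=5)   # current shared model weights
x = rng.normal(size=5)   # one patient's private feature vector
y = 1.0                  # that patient's private outcome

# For logistic regression, the single-patient gradient is
# (sigmoid(w.x) - y) * x, i.e. a scalar multiple of x itself
grad = (sigmoid(w @ x) - y) * x

# An observer of this one-patient update recovers x up to scale
x_hat = grad / np.linalg.norm(grad)
cosine = abs(x_hat @ x) / np.linalg.norm(x)
print(f"Cosine similarity with the private features: {cosine:.3f}")  # 1.000
```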

Conclusion

Unlocking cardiovascular data is the bottleneck in personalising treatment in cardiology. Efforts are needed to improve interoperability and harmonisation, so that multiple data sources can be used in predictive models. Proof of improved outcomes is required before adoption into clinical practice.

The content of this article reflects the personal opinion of the author/s and is not necessarily the official position of the European Society of Cardiology.