THE LENS
Digital developments in focus

Could AI predict your future health?

Healthcare continues to be a major focus for AI research and investment, from detecting and diagnosing disease to formulating new hypotheses for potential causes and cures (or at least spotting correlations).

Detection

"Wearables" have already begun to change the way we track our health, but as well as helping to keep known conditions under control, tech can also help alert us to problems we might not have noticed.

For example, Stanford University last week released preliminary results of its Apple Heart Study, in which participants used an Apple Watch and a companion app to monitor their heart rate. Only 0.5% of participants received "irregular pulse" alerts, but 34% of those were later found to have atrial fibrillation, a condition that often goes undiagnosed because it doesn't always cause noticeable symptoms.
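
To put those percentages in context, here is a rough back-of-the-envelope calculation. The cohort size below is an assumed round number for illustration, not a figure from the study:

```python
# Hypothetical illustration of how the Apple Heart Study percentages compound.
# The cohort size is an assumed round number, not the study's actual enrolment.
cohort = 400_000                      # assumed number of participants
alert_rate = 0.005                    # 0.5% received "irregular pulse" alerts
afib_rate_among_alerted = 0.34        # 34% of those later found to have AFib

alerted = cohort * alert_rate
confirmed = alerted * afib_rate_among_alerted

print(f"Alerted participants: {alerted:,.0f}")    # ~2,000
print(f"Confirmed AFib cases: {confirmed:,.0f}")  # ~680
```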

Assessing the risk

Doctors already use known correlations to help predict their patients' health risks, so it is no surprise that AI is finding a role. I have previously written about how AI is well suited to analysing large data sets; that ability allows AI to generate a "polygenic risk score" from many individual variants across a person's genome. For example, patients whose DNA placed them in the top 2.5% of risk scores for coronary heart disease were found to have four times the average chance of developing coronary plaque.
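
For illustration only, a polygenic risk score is essentially a weighted sum over many genetic variants. The sketch below uses invented weights and genotypes rather than any real model:

```python
import numpy as np

# Minimal, illustrative sketch of a polygenic risk score (PRS).
# The weights and genotypes are invented for demonstration; real scores are
# derived from genome-wide association studies covering millions of variants.
rng = np.random.default_rng(0)

n_variants = 1_000
effect_weights = rng.normal(0, 0.05, n_variants)   # per-variant effect sizes (assumed)
genotypes = rng.integers(0, 3, n_variants)         # 0, 1 or 2 copies of each risk allele

prs = float(np.dot(genotypes, effect_weights))     # weighted sum across the genome
print(f"Illustrative polygenic risk score: {prs:.3f}")
```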

These predictors enable doctors to take a patient's polygenic risk into account in the same way as other genetic and lifestyle indicators. However, there is a worrying risk that indicative scores will be treated as predictive certainties.

Statistics, not predictions

In part, this is because any hypothesis is only as good as the underlying data.  Where a DNA database is made up primarily of, for example, white people of European ancestry, models based on that database will perform less well for other populations. Remote data gathering and distributed learning techniques may help to alleviate data bias to some degree, but there will always be a need for caution.
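
One such distributed technique is federated learning, in which each site trains on its own data and only the fitted model parameters, never the raw records, are pooled. The toy sketch below shows the idea with simple parameter averaging over entirely synthetic data from three hypothetical hospitals:

```python
import numpy as np

# Toy sketch of federated averaging: each site fits a model on its own data
# and only the fitted parameters (never the raw records) are shared centrally.
rng = np.random.default_rng(1)

def local_fit(X, y):
    # Ordinary least squares on one site's data (stand-in for any local model).
    return np.linalg.lstsq(X, y, rcond=None)[0]

true_w = np.array([0.5, -1.2, 2.0])                 # synthetic "ground truth"
site_weights = []
for _ in range(3):                                  # three hypothetical hospitals
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    site_weights.append(local_fit(X, y))

global_w = np.mean(site_weights, axis=0)            # average the local models
print("Federated estimate:", np.round(global_w, 2))
```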

It may also be harmful to quantify risk in this way.  The same tech that might prompt a user to seek medical help or make lifestyle changes could also promote unnecessary tests (a particular risk where healthcare is commoditised), create a false sense of security that discourages preventative action, or simply leave people unnecessarily concerned.

Schizophrenia, for example, is highly genetically influenced: if one identical twin develops schizophrenia, there is roughly a 50% chance the other will too. Yet some experts feel it would be reckless to give apparently healthy people access to their genetic risk scores: although both twins share the same DNA and the same polygenic risk score, a prediction that the second twin will develop schizophrenia would still be wrong half the time.
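
Spelling out that arithmetic, using only the 50% concordance figure above:

```python
# If one identical twin has schizophrenia, the other develops it ~50% of the time.
# So a flat prediction that the co-twin will also develop it is wrong half the time,
# even though the two share identical DNA and an identical polygenic risk score.
concordance = 0.5
false_prediction_rate = 1 - concordance
print(f"Chance the prediction is wrong: {false_prediction_rate:.0%}")  # 50%
```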

"Virtual" studies like the Apple Heart Study offer a glimpse into how healthcare might one day look: practitioners and researchers can collect data continuously as patients go about their daily lives, spotting irregularities which might not have shown up within the usual ten-minute appointment - and follow-up medical consultations can even take place remotely - but while all this data opens up exciting opportunities, it is important that developers remain mindful of the individuals behind the statistics.

In the UK, five new government-funded technology centres will open in 2019, using AI to accelerate disease diagnosis with the aim of making the National Health Service more efficient.


Tags

ai, healthcare, healthtech, wearables