The image below, from the Journal of the American Medical Association, is a child's depiction of her healthcare experience: she sits on the examination table while her doctor, turned away from her, works at the computer.
Healthcare is just one example of an industry where data is incontrovertibly useful, but where, perversely, the sheer (and, with the advent of wearables, increasing) volume of that data can impede its effective use. AI can offer a solution to this problem.
Studies have shown AI to be as good as human doctors at diagnosing certain conditions, such as breast cancer: analysing large data sets to identify patterns and derive predictions is a task well suited to a machine. However, this data-driven approach still benefits from human input. A doctor who knows their patients well can tailor recommendations to reflect what is realistic for a particular patient, and can respond to another patient's vague feeling of being "under the weather" or simply "not quite right" in a way that AI cannot.
As with all AI implementation, though, there are questions to be asked, not least in relation to professional ethics. Better-informed doctors and patients can surely only be a good thing, but AI can now provide interactive, responsive recommendations, not just the existing one-way information and guidance. As healthtech becomes more sophisticated, who will have the ultimate say? What will be the scope for a doctor or their patient to deviate from the approach suggested by the AI, and, ultimately, who will be held accountable if the AI gets it wrong?
AI in the exam room opens up the chance to recapture the art of medicine. It could let doctors get to know their patients better, learn how a disease uniquely affects each of them, and free up time to coach them toward a better outcome.