by futurist Richard Worzel, C.F.A.
Imagine you had a rare disease that was going to kill you, but neither your doctor, nor any of the doctors you’ve been referred to or seen, knew anything about it or had ever encountered it. As a result, you would likely die.
Unless…
Unless an Artificial Intelligence correlated data from several million pages of medical research to identify this obscure disease, and proposed specific tests to determine if this was, indeed, what was afflicting you.
This is one of the many ways in which Artificial Intelligence (“AI”) will affect your health. Indeed, AI will eventually permeate all walks of our lives, business, and society – it will become the most important technology development in decades, possibly in the whole of human history. But rather than look at a broad range of applications (as I’ve done in some earlier blogs), let’s focus on the ways AI will affect your health in the next decade, between now and 2030.
AI as Your In-Home Doctor
My mother used to keep a medical encyclopedia handy, and pulled it out whenever one of the family presented with symptoms she didn’t recognize. Today, doctors dread Internet experts who arrive with print-outs from medical websites that they believe match their symptoms, brought along to justify a particular prescription or treatment.
So, the future will be both better and worse. Your personal genie will have access to far more sophisticated resources to enable you to be diagnosed at home, and will be able to monitor your health at all times, alerting you if your health changes in some way, probably before you notice it.
It will provide a solid first assessment of what’s going on with your health, and recommend one of three courses of action: (1) do nothing, because there is insufficient information to warrant action, your condition is benign, or it’s too early to tell whether something important is happening; (2) treat what is likely a minor condition, such as a cold, that warrants watching but for now needs nothing more than an over-the-counter drug or a test you can get at your local pharmacy; or (3) seek help from a health care professional, because your condition may be significant or serious, or is too ambiguous to assess at home.
In the last case, you will probably be directed to your pharmacist, a nurse-practitioner, or your doctor, depending on the risks involved, and how probable this initial diagnosis is. Or, in the event of a life-threatening condition, such as a heart attack or stroke, your genie may call for an ambulance and alert your doctor before you’re even aware that something is happening.
While all of this sounds good, it may also lead to people running to their doctor’s office more often, because such constant monitoring will uncover many minor complaints that you might never have noticed otherwise. The silver lining is that whichever health care professional you see won’t have to start from scratch; your personal genie will provide the reason for the visit, along with a detailed list of your vital statistics and the observed symptoms.
This will become your first line of defence in managing your health, sort of a doctor-in-a-box.
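For readers who like to see the mechanics, here is a minimal sketch, in Python, of the three-way recommendation described above. The readings, thresholds, and messages are all illustrative assumptions on my part, not clinical guidance or any real product’s logic.

```python
# A toy three-way home-triage rule, mirroring options (1)-(3) above.
# All fields and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Reading:
    temperature_c: float      # body temperature from a wearable sensor
    resting_heart_rate: int   # beats per minute
    symptom_score: float      # 0.0 (none) to 1.0 (severe), self-reported or inferred

def home_triage(r: Reading) -> str:
    """Return the course of action the 'genie' might suggest."""
    # (3) Readings far enough outside normal ranges to involve a professional.
    if r.temperature_c >= 39.5 or r.resting_heart_rate >= 120 or r.symptom_score >= 0.7:
        return "seek help from a health care professional"
    # (2) Mild, watchable signs that an over-the-counter remedy or pharmacy test covers.
    if r.temperature_c >= 37.8 or r.symptom_score >= 0.3:
        return "minor condition: watch it, treat over the counter, or get a pharmacy test"
    # (1) Nothing actionable yet.
    return "do nothing for now; keep monitoring"

print(home_triage(Reading(temperature_c=38.2, resting_heart_rate=88, symptom_score=0.4)))
```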
Predictive Diagnoses
As medical databases of electronic patient records are amassed, more and more predictive models of life-changing or life-threatening conditions will emerge. These will start with things like your age, gender, location, and lifestyle, but then graduate to more subtle things: What kinds of foods you eat. Where you live relative to weather patterns that may blow airborne pollutants towards or away from your home. When your home was built, and with what materials. How much sunlight you get. Whether you sleep on your side, back, or front. Whether you sleep alone or with someone. And much more.
Indeed, we will almost certainly be surprised at the things that make a difference in our health over an extended period of time.
But as we accumulate data on the general health of people in a particular area, who eat specific foods, maintain certain kinds of lifestyles, or have specific genetic characteristics, we will develop increasingly intricate and sensitive predictive models of your future health. These will be downloaded to your personal genie, which will then be able to watch for specific symptoms or health indicators that you might otherwise never have considered. And if such indicators appear, your genie will then be on watch for further indicators, good or bad, that may foreshadow a change in your health.
This will allow ever-earlier warnings of potential threats from things like diabetes, heart conditions, or osteoporosis, and allow pre-emptive action to prevent such threats from developing, or to lessen their severity if they do occur.
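To make the idea concrete, here is a minimal sketch of the kind of feature-based risk model described above, written in Python. The feature names, weights, bias, and alert threshold are invented for illustration; a real model would be fit to large electronic-record datasets and clinically validated.

```python
import math

# Hypothetical weights a population-level model might assign for one condition.
WEIGHTS = {
    "age_over_50": 0.9,
    "sedentary_lifestyle": 0.6,
    "high_sugar_diet": 0.5,
    "lives_near_airborne_pollution": 0.4,
    "poor_sleep": 0.3,
}
BIAS = -3.0  # keeps the baseline risk low when no risk factors are present

def risk_score(profile: dict) -> float:
    """Logistic-style probability that a condition develops, given yes/no features."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if profile.get(name, False))
    return 1.0 / (1.0 + math.exp(-z))

profile = {"age_over_50": True, "sedentary_lifestyle": True, "poor_sleep": True}
p = risk_score(profile)
print(f"estimated risk: {p:.1%}")
if p > 0.10:  # above this, the 'genie' would start watching for early indicators
    print("flag: watch for early symptoms of this condition")
```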
And the feedback between health databases and your personal genie will be a two-way street. If you exhibit novel or unexpected symptoms, they might indicate a new disease, and provide an early warning of a new, emerging threat. This will allow health authorities to take early steps to stop the spread of something like the coronavirus that jumped from obscurity to being a global epidemic in early 2020.
AI-Assisted Diagnoses
An audience member once asked me whether AI-based diagnoses would put oncologists out of business because of AI’s superior ability to identify cancers. My reply was that AI won’t replace oncologists, but that oncologists who use AI will replace oncologists who don’t.
I believe this cobot model of human-AI cooperation will prevail, at least in the medium term, because humans are good at certain things that AIs are not, and vice versa. The combination of human and machine, therefore, will be more powerful than either on its own.
So, when you do meet with a health care professional for any given reason, you are likely to experience them using AI to support and extend human knowledge, experience, and judgment. And that will mean better, faster, more accurate diagnoses, as well as the observation of obscure indicators, prescription of little-known tests or drugs, and the application of the latest and greatest research to your situation.
It will also reduce the frequency of medical mistakes at all levels. While the actual number of deaths caused by medical mistakes is both highly uncertain and controversial, having an AI provide a second assessment of a given treatment, diagnosis, or prescription would almost certainly reduce the death rate, even if only by causing a physician to re-examine and reconsider their judgment.
Early Indicators and Collateral Symptoms
Dentists are often the first health care professionals to know when a woman is pregnant, because pregnancy can cause mucous membranes in the mouth to swell.
Several years ago, I was experiencing lower back pain. Given the nonspecific nature of such pain, I decided to go to a massage therapist rather than my doctor. The therapist quickly noted that my shoulders were extremely tense and, after finishing the massage, explained that my lower back pain was probably being caused by that tension. Apparently, I was holding my shoulders up near my ears, which was tiring the muscles in my lower back and producing the soreness. She suggested that I focus on relaxing my shoulders, and that, plus additional massage sessions and an increase in my yoga practice, made my back pain disappear.
The point is that the body is an integrated system, not a group of isolated machines that work independently of each other. As a result, signals in one part of the body may indicate issues in another.
One recent example is an emerging diagnostic system called AlzEye:
“An unusual research project called AlzEye, run from Moorfields Eye Hospital in London…is attempting to use the eye as a window through which to detect signals about the health of other organs.…This will allow them to look for telltales of disease in the eye scans.”[1]
As electronic patient records become more widespread, and data mining proceeds (hopefully with proper legal and ethical considerations, as happened with AlzEye), we will find more circumstances where indications in one part of the body may signal issues or problems elsewhere.
AlzEye examines an image of the eye’s retina, which is cheap to produce (currently less than £30, or about US$40), and may provide early indications of problems like heart disease, stroke, and Alzheimer’s disease. While there is currently no cure for Alzheimer’s, its progression can sometimes be slowed, so early diagnosis is highly desirable, yet until recently it could be expensive.
As we learn more about the body, and start to grasp the interconnections from one part of the body’s system to another, we will be able to develop new, cheaper, more reliable early indicators of problems emerging elsewhere.
Automated Record-Keeping, Billing, and Administration
Most people will be surprised that I would mention something as pedestrian as administration and billing in the same breath as AI in health care. Yet, these things eat up an incredible amount of time in a physician’s day that could be spent with patients instead:
“For every hour physicians provide direct clinical face time to patients, nearly 2 additional hours is spent on EHR [Electronic Health Records] and desk work within the clinic day. Outside office hours, physicians spend another 1 to 2 hours of personal time each night doing additional computer and other clerical work.”[2]
If this office-overhead time were freed up, it would give physicians, and all health care workers, more time with patients, as well as reducing physician and patient stress in coping with arcane and punishing bureaucracies.
Moreover, in the U.S. at least, the complex, rule-based bureaucracies involved in billing for private health insurers, Medicare, Medicaid, and the Veterans’ Administration are a nightmare that AI would be better suited to managing than most humans.
And health care workers, including physicians, could have their genies watch, listen, and interpret what treatments are scheduled, what should be billed, and what patient notes need to be taken, recorded, and communicated, then prepare them for review and approval at the end of each day. This is precisely the kind of human-machine symbiosis I expect will emerge.
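As a small illustration of that symbiosis, here is a sketch of the end-of-day review step described above: the assistant drafts notes and suggested billing entries, and nothing is filed or billed until the clinician approves it. The class names, fields, and the sample billing code are my own placeholders, not any real billing system’s interface.

```python
from dataclasses import dataclass, field

@dataclass
class DraftEntry:
    patient_id: str
    note: str            # drafted from what the assistant heard and observed
    billing_code: str    # suggested code, to be confirmed by the clinician
    approved: bool = False

@dataclass
class ReviewQueue:
    drafts: list = field(default_factory=list)

    def add(self, entry: DraftEntry) -> None:
        self.drafts.append(entry)

    def approve(self, patient_id: str) -> list:
        """Clinician signs off; only approved entries would be filed or billed."""
        approved = []
        for d in self.drafts:
            if d.patient_id == patient_id:
                d.approved = True
                approved.append(d)
        return approved

queue = ReviewQueue()
queue.add(DraftEntry("pt-001", "Follow-up visit; blood pressure stable.", "99213"))
print(queue.approve("pt-001"))
```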
Emergency Room Triage
Imagine that you have a serious, urgent health issue, and go to your local emergency room, only to find it jam-packed with lots of people with health issues big and small. Will you get the attention of someone who can help you quickly? Perhaps. But then again, perhaps not, depending on how well the ER is run.
Now imagine a diagnostic AI that not only interviews each person as they walk in, but also looks at them in visible and infrared light, checks their pulse, listens to their heartbeat and the ease of their breathing, looks for signs of distress or wounds, consults their personal genie, and evaluates their vital signs based on what it has observed. Someone with an urgent health issue could be referred to a human immediately for a more thorough evaluation, while someone whose issue is less time-sensitive could be prioritized and given an estimate of when they are likely to be seen by a health care professional. Then, if they so chose, they could come back at the appointed time and be seen on a priority basis.
And if my first projection, the at-home doctor-in-a-box, were available, it could transmit your symptoms to the ER and receive a priority number and a projected time for you to appear, with the result that when you walked in, you would be seen almost immediately.
This could streamline ER operations, particularly if clinicians worked with the AI to improve its ability to assess and triage incoming cases. It could also save lives, and reduce waiting room aggravation.
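A minimal sketch of that triage logic, with invented weights and cutoffs, might look like the Python below; a real system would use clinically validated scoring, but the shape of the idea is the same.

```python
import heapq

def acuity_score(heart_rate: int, resp_rate: int, spo2: float,
                 pain: int, visible_distress: bool) -> float:
    """Higher score = more urgent. Weights here are illustrative, not clinical."""
    score = 0.0
    score += max(0, heart_rate - 100) * 0.05   # elevated heart rate
    score += max(0, resp_rate - 20) * 0.2      # laboured breathing
    score += max(0.0, 94.0 - spo2) * 0.5       # low oxygen saturation
    score += pain * 0.1                        # self-reported pain, 0-10
    score += 2.0 if visible_distress else 0.0  # distress seen by the camera
    return score

# Priority queue of waiting patients (scores negated because heapq is a min-heap).
waiting = []
heapq.heappush(waiting, (-acuity_score(125, 26, 91.0, 7, True), "patient A"))
heapq.heappush(waiting, (-acuity_score(78, 14, 98.0, 2, False), "patient B"))

neg_score, who = heapq.heappop(waiting)
print(f"see next: {who} (score {-neg_score:.1f})")
```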
Imaging, Augmented Reality, and Robot Surgery
The art and science of surgery will benefit enormously from AI. AI will be able to review images of an organ, a tissue sample, or a trauma and, as discussed earlier in regard to diagnoses, help the surgeon assess and diagnose the situation, and select the best kind of surgery or best surgical technique from the relevant literature, including up-to-the-minute research studies.
As an early example of more timely tissue assessments, a recent study about the use of AI in diagnosing brain tumors while surgery was in progress showed that AI in the operating room can take a laser scan of a tissue sample and come up with a diagnosis within 2½ minutes. This compares with 30 minutes or more using traditional, rushed lab techniques that are less accurate. In fact, the AI diagnoses performed in the operating room are as accurate as the best diagnosticians using slower, more extensive tests performed after surgery: 94.6% accuracy for the AI vs. 93.9% for humans using more extensive tests after surgery – a statistical tie.
One of the study’s authors (and an investor in the company making the imaging system), Dr. Daniel Orringer, commented that the study says that “the combination of an algorithm plus human intuition improves our ability to predict diagnosis…If I have six questions during an operation, I can get them answered without having six times 30 or 40 minutes…[This] won’t change brain surgery,” he added, “but it’s going to add a significant new tool”.[3]
However, in addition to the diagnostic side of surgery, AI will become invaluable in its actual performance. It will be able to identify tiny details that might otherwise be overlooked, or integrate indicators into a broader assessment as the surgery progresses and new parts of the surgical site are revealed. Augmented Reality, drawing on the various imaging techniques available, will let the surgeon effectively look through tissue and anticipate what they might encounter as the operation proceeds.
And the AI might actually perform the surgery in cooperation with the human surgeon. The human surgeon could speak their intentions, and even guide the surgical robot as if they were performing the surgery themselves. The AI would interpret both the verbal instructions and the physical manipulation, and move the surgical instruments to implement what the human surgeon wants to do, but with greater speed and precision than human hands could achieve.
The AI could also offer additional information or opinions, based on the medical literature and case studies of similar surgeries, so that the human surgeon has the most relevant and timely information at their disposal as the surgery progresses.
As I’ve said in earlier commentaries, AI is not easy to establish. It takes lots of good quality data, an excellent understanding of the analysis involved, and a clear set of objectives to produce a well-functioning AI. But the benefits of AI in health care will become so compelling that companies in the health care space will race to throw money at these things, confident that they will recoup the cost many times over.
And the seven examples that I’ve provided here are really just the tip of an enormous iceberg. As we become more accustomed to using AI, and as human intuition comes to bear on how to use AI, newer, ever-more powerful applications, things that today would sound like wild-eyed science fiction or fantasy, will emerge.
Within 20 years, we will have a hard time imagining how we ever managed health care without artificial intelligence. It will literally be a life-saver.
© Copyright, IF Research, February 2020.
[1]“iScanning: A system based on AI will search the retina for early signs of disease”, The Economist, 21 December 2019, pp.119-120.
[2]“Allocation of Physician Time in Ambulatory Practice: A Time and Motion Study in 4 Specialties”, Annals of Internal Medicine, https://annals.org/aim/article-abstract/2546704/allocation-physician-time-ambulatory-practice-time-motion-study-4-specialties
[3]Grady, Denise, “Speedy and Unerring, A.I. Comes to the Operating Room”, New York Times, 7 January 2020, p.B6.