I guess most people in the community know the story of Watson Health. In brief, it is IBM’s ambitious project targeting one of the hardest problems in medical IT, clinical decision-making, yet it seems unsuccessful so far. Two months ago, there was a rumor that IBM was planning to sell Watson Health. Should a challenge be considered to boost the implementation of Dr. AI?
It is my understanding that in China there’s already a form of “Dr. AI” that is performing better than Watson Health on some very specific problems.
Maybe the challenge should indeed be more about the implementation of said systems, rather than ‘just’ making them more accurate.
@sarahb, @sjatkins, @ross_d_king, could I ask for your feedback on this? Do you think this is a viable area of focus for our Global Visioneering program, which will culminate in 10 ideas for $10M XPRIZE competitions? If so, please give this a vote!
More info about Global Visioneering here.
As @Roey says there are aspects of this to explore including:
- The technology/model development
- The data and bias checking (we don’t want Dr AI only to serve subsets of the community and cause harm to others)
- The implementation (how should it be used, how do humans and Dr AI work together, risks, checks, potential harms, feedback mechanisms)
I think it’s a solid candidate.
Hey @NellWatson , welcome to the Community! If you like this topic you could vote for it (Click on the vote button below the discussion title).
Any thoughts you would like to share on why this topic is important and what challenges it could solve? Thanks.
The value of a health diagnostic tool goes far beyond enabling people to access basic health information. It could also be used to analyze regional and global health trends, helping to identify new virus variants at an early stage, or an unusual cluster of symptoms - such as the Flint, Michigan lead poisoning situation. To be effective, there needs to be a two-way exchange of information that respects privacy and ensures the reliability of information entered into the database.
The use of AI in health (and biological research) will be one of its greatest applications! There is already a lot of work taking place in this sector, but perhaps the greatest opportunity for future AI in health, science, innovation and other applications is this:
A network of interoperable AI subsystems, whereby an “executive” level AI subsystem can call upon multiple specialised AI subsystems to provide comprehensive analysis, good accuracy in its predictions and recommendations for optimum outcomes.
A standard for AI interoperability would allow AI subsystems to dynamically collaborate on a specific task. It would potentially address aspects of bias and accuracy, by deriving conclusions from multiple systems and data-sets. The resulting system might have similarities to Internet standards, and its associated benefits.
Such an approach has the potential to be incredibly powerful!
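To make the idea concrete, here is a minimal sketch of the “executive + subsystems” pattern described above. All names and the confidence-weighted voting rule are illustrative assumptions on my part, not an existing standard; a real interoperability standard would also need to define message formats, provenance, and audit trails.

```python
# Hypothetical sketch: an "executive" AI queries multiple specialised
# subsystems and combines their answers. All names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Prediction:
    label: str
    confidence: float  # 0.0 - 1.0

# A "subsystem" is anything that maps a case description to a prediction.
Subsystem = Callable[[dict], Prediction]

def executive(case: dict, subsystems: list[Subsystem]) -> Prediction:
    """Query every subsystem, then take a confidence-weighted vote.

    Deriving the answer from multiple independent subsystems is what
    would address the bias and accuracy concerns mentioned above.
    """
    votes: dict[str, float] = {}
    for ask in subsystems:
        p = ask(case)
        votes[p.label] = votes.get(p.label, 0.0) + p.confidence
    label = max(votes, key=votes.get)
    return Prediction(label, votes[label] / sum(votes.values()))

# Two toy subsystems standing in for models trained on independent data-sets.
radiology = lambda case: Prediction("pneumonia", 0.8)
labs = lambda case: Prediction("pneumonia", 0.6)
result = executive({"age": 54}, [radiology, labs])
```

The interesting design question is the aggregation rule: a simple vote is shown here, but the standard could equally mandate that the executive surface disagreements between subsystems rather than hide them.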
I like this one. It’s basically “hyper automation” for healthcare and medicine, which is something that’s long overdue. It’s much more difficult than it may look at first glance, but its value is very clear, both for developing and developed regions.
@akb - If this were the goal of a new XPRIZE competition, how would you define “success” for the participants? What product would they have to demonstrate and how?
Good question! The challenge could include the demonstration of an executive AI system calling upon multiple AI subsystems:
- across different sectors (to enhance ability, “intelligence”, and common sense)
- multiple subsystems in a given sector (to reduce bias and increase accuracy (and robustness))
This might require the definition of a standard (protocol) and the participation of multiple AI subsystem prototypes (with data-sets across various sectors). As a minimum example:
- 1 executive (with human interaction interface (e.g. web page))
- 2 subsystems (with independent data-sets) in sector / knowledge domain X
- 2 subsystems (with independent data-sets) in sector / knowledge domain Y
Success might be an outcome that is significantly “smarter” than today’s state-of-the-art standalone AI systems. I might have to ponder how we objectively measure “smarter”.
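The minimum example above could be sketched as a simple registry on the executive side. The domain names and model identifiers below are placeholders I made up for illustration; no such registry or protocol currently exists.

```python
# Hypothetical minimum configuration: one executive, two independent
# subsystems per knowledge domain (domains X and Y from the post above).
REGISTRY = {
    "cardiology": ["cardio_model_a", "cardio_model_b"],    # domain X
    "dermatology": ["derm_model_a", "derm_model_b"],       # domain Y
}

def route(domain: str) -> list[str]:
    """Executive-side lookup: which subsystems can serve this domain?

    Enforces the proposed minimum of two subsystems with independent
    data-sets per domain, so conclusions can be cross-checked to
    reduce bias and increase robustness.
    """
    subsystems = REGISTRY.get(domain, [])
    if len(subsystems) < 2:
        raise ValueError(f"need >= 2 independent subsystems for {domain!r}")
    return subsystems
```

A judging harness for the competition could then compare the executive’s cross-checked answers against each standalone subsystem on a held-out benchmark, which would be one objective way to measure “smarter”.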
Hi @ymedan, @MachineGenes, @ajchenx, @synhodo, @dollendorf, @SArora, @kenjisuzuki - We would love to hear your thoughts on this topic. Do you think this area of focus is worth the next $10M XPRIZE? If so, please vote for this topic.
The basic idea of “Dr. AI” is sound. However, we have to be absolutely clear on what it means:
It makes no sense to have a general “Dr. AI”. Each team should instead nominate a specific medical/therapeutic target, that obeys basic criteria of “public good” relevance, and design an AI appropriate for that target. (No “general” AI exists; it is highly unlikely it could be made to exist with current technologies; and even if it did, it would be prohibitively expensive to build and train, let alone test in a rigorous, clinically-relevant way that the US FDA would accept.)
Note that diagnostics are a completely different body of work. A diagnostics challenge would be completely different in structure to a therapeutics one. In terms of real-world impact, a therapeutics challenge that creates AI superior to a human medical counterpart is more urgent and in my opinion, ultimately more valuable.
Almost all artificial neural network (ANN)-based implementations of AI are inherently unsuitable for clinical use, due to multiple severe problems: lack of explainability, the consequent lack of appropriate real-time interaction with human clinicians, inability to exploit human expertise in an immediate real-time clinical setting, the requirement for “good” training data (which presumes an almost-adequate therapeutic outcome is already present in medical histories), the need for huge datasets, and the problem of drift over time. However, there are other forms of AI, superior to ANN-based architectures, that would be well suited for this challenge and address these issues.
The above issues with AI are critically important in a clinical context. I noticed some contributors above talking about “common sense”. In AI there is no such thing. Which is why medical AI needs to have full explainability. This is achievable, but requires innovation in the core algorithms as well as their medical application.
This is also ultimately why Watson appears to have disappointed. Other ANN-based AI will similarly disappoint: the problem is in the fundamental algorithm architecture.
Consequently I think a “Dr. AI” topic is an excellent idea. It will force genuine innovation at a fundamental level, rather than teams using commercial off-the-shelf solutions. And $10M is a worthy sum for that level of innovation.
I don’t believe in fully automating the role of the MD because it will be biased and flawed, especially when it comes to rare medical events.
I would advocate for a surrogate AI that will augment not only MDs but also RNs and caregivers, in order to reduce the load on medical staff to just the essential interactions and interventions.
I view AI and digital health technologies as empowering humans to stay healthy and resilient via self-care. If it were up to me, I would create a challenge related to Intelligence-Augmented (IA) Humans (Human Health-Centric AI). It would remove the need to be a patient when it comes to consuming medical care.
Another advantage of such digital tools is the ability to equalize quality of care across diverse populations and geographies by disseminating new evidence-based knowledge and insights at the speed of light.
On this subject: I think Andrew Ng recently gave a lecture where he explained that while AI can have great statistical success at diagnosing patients based on their X-ray images or the likes, once you move the AI to another hospital from the one it was trained in, it just can’t reach the same level of accuracy.
Still plenty of problems to solve in this field, I’m afraid.
Yesterday we had our first meeting with members of the XPRIZE Health Brain Trust. This is a group of eminent thought leaders and visionaries, who will guide the foundation’s work in health, including for Global Visioneering.
One of their suggestions – and I also mention this in the Equity in Health Care discussion – is to leverage AI to surface biases in health care.
Another is to use AI to upskill health workers.
I’m curious how that could work practically and would like to get your thoughts.
A few more points on this from our second Health Brain Trust meeting:
- AI and data can help shape and reinforce positive health behaviors.
- Tech/AI should be leveraged to improve health outcomes regardless of income and wealth.
- Necessary data should be easily accessible for AI, including psychosocial data.
Not directly answering the questions here, but for me a big question is the following:
Big Data and A.I. are incredibly efficient at finding information online (through Google) even when the question is asked somewhat incorrectly (with typos, with some part of the query incorrect…). Why are Big Data and A.I. not nearly as efficient (through IBM Watson, for example) at finding good information when questions are not perfect and/or when the data is not perfectly curated? And how can we change this?
Moving this topic into the breakthroughs phase of the Global Visioneering program. We’ve also discussed it as such with our Health Brain Trust. I’m excited to see this move forward!
Here’s the current description of this breakthrough:
- Doctor AI. Clinical tool that collects and synthesizes all relevant medical research, analyzes electronic medical records, and makes hypotheses and/or recommendations to caregivers.
(It needs to be one paragraph for now. If it’s selected to be developed into a prize idea - and that partly depends on the feedback it receives here - the description will be expanded.)
@NickOttens, I think it’s important to recognize that ‘collection’ of all relevant medical research is different from ‘synthesizes’, which in turn is different from the rest of the description. Many clinicians may be uncomfortable about the ‘collects and synthesizes’ specification for AI, unless it’s done collaboratively, given the poor performance of conventional AI in explainability. And the ‘all’ demands a lot! (Remember that even Watson, with all the resources of IBM, was struggling under the volume of papers published in oncology etc.)
Hence I’d suggest ‘Clinical tool that synthesizes relevant medical research in collaboration with clinicians, then analyzes electronic medical records and makes hypotheses and/or recommendations to caregivers.’
You might also include a requirement for explainable collaboration, as this is the real problem with clinical AI. Something along the lines of ‘These hypotheses or recommendations are to be explainable and interactive, enabling the AI and caregivers to decide a consequent course of treatment.’
So the specification might read:
‘Doctor AI. Clinical tool that synthesizes relevant medical research in collaboration with clinicians, then analyzes electronic medical records and makes hypotheses and/or recommendations to caregivers. These hypotheses or recommendations are to be explainable and interactive with caregivers, enabling the AI and caregivers to decide a consequent collaborative course of treatment.’
Given that different diseases are extremely different in their affinity to AI, I would suggest that the competition allow submissions for a specific disease or medical condition, with the following three criteria:
1. The relevance of the enhanced treatment of that disease or condition to humanity;
2. The degree of enhancement to the treatment provided by the AI; and
3. The scalability of the proposed solution technology to other diseases or conditions relevant to humanity.