Challenges in Using Dr. AI

In the subdomains thread we discussed Dr. AI and listed several issues relating to using it:

  • ANN-based implementations of AI being unsuitable for clinical use.
  • Data and bias checking.
  • Interoperability of AI subsystems.
  • Accuracy, prediction, and robustness of the system.
  • Fundamental algorithm architecture.

Do you agree with the above initial list of challenges? What else would you add?

Apart from technological challenges, I would love to have some feedback on the social, political, environmental, and economic aspects of the challenges as well.

@alberg, @fabienaccominotti, @eakinyi, @rosiecampbell, @yanchen, @akastner, @key2xanadu, @Ligia_Nobrega, I’d love your input on this from a data perspective.

We want to better understand the challenges of implementing AI in healthcare, to help the Community decide whether this is an area where we could potentially design an XPRIZE competition.

Hi @MachineGenes, @NellWatson, @kenjisuzuki, @reubenwenisch, @biki, @ukarvind, @BrendaMurphy, @erickson, @bwilcher, @AshokGoel, @Anita, @jmossbridge - We would love to hear your input on the biggest challenges in implementing AI in healthcare. Thanks.

I agree with the analysis of the challenges, and I would add the explainability of predictions and recommendations as another important difficulty to be overcome.

Agreed; important issues.

A few more: there is a need for

  1. assessing interpretability of AI systems in healthcare
  2. interoperability (how to connect AI components in multifunctional pipelines)
  3. rubrics for human-AI interfacing (building aligned human-AI teams)
  4. creating model scorecards to compare different models (their architectures, performance, credibility/veracity)
  5. measuring inferential quality via uncertainty measurements, making sure predictions are calibrated so that predictions from different models can be compared and combined (see the sketch after this list)
  6. technologies for reliable and robust federated learning
  7. model security (preventing attacks on and compromise of deployed AI systems)
  8. rubrics for regulatory and compliance assessment of AI workflows
  9. perhaps the most important: education and training, so that AI developers and deployers speak a common language to understand and meaningfully calibrate expectations of AI systems
  10. legal aspects: how do you assess AI system error in decision making, and who is responsible for an AI-driven mistake (developer, deployer, decision maker)?
  11. ethics of AI (disparities across underprivileged groups, etc.)
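
To make item 5 concrete, here is a minimal sketch of one common calibration check, the expected calibration error (ECE), written in Python with NumPy. The function and the example probabilities are illustrative assumptions only, not part of any specific clinical system.

```python
# Minimal sketch: expected calibration error (ECE) for binary predictions.
# Purely illustrative -- a real calibration audit would use proper validation
# data, reliability diagrams, and domain-specific bin choices.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Weighted average of |mean confidence - observed accuracy| per probability bin."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    # Assign each prediction to one of n_bins equal-width bins over [0, 1].
    bin_ids = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if not mask.any():
            continue
        avg_conf = probs[mask].mean()   # mean predicted probability in this bin
        avg_acc = labels[mask].mean()   # observed positive rate in this bin
        ece += mask.mean() * abs(avg_conf - avg_acc)
    return ece

# Hypothetical example: two models scored against the same validation labels.
labels  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
model_a = np.array([0.90, 0.20, 0.80, 0.70, 0.30, 0.10, 0.60, 0.40])
model_b = np.array([0.99, 0.01, 0.99, 0.55, 0.45, 0.05, 0.51, 0.49])
print("Model A ECE:", expected_calibration_error(model_a, labels))
print("Model B ECE:", expected_calibration_error(model_b, labels))
```

A lower ECE means the model's stated confidence is, on average, closer to its observed accuracy, which is what makes confidence scores from different models comparable and combinable.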

Fantastic summary, @ukarvind! Are you involved in the AI healthcare field?

Thanks, Arvind, for sharing these challenges. Since you feel this is an important issue, please vote for this challenge (the vote button is in the upper left, next to the discussion title).

Hi @sarahb, @erickson, @scveena, @Sujana, @synhodo, @acowlagi, @dzera and @Shabbir - You might have input to share on some of the biggest challenges in using AI for healthcare.

Hi @Roey, yes. Very interested in processes for the effective deployment of AI in healthcare.

Hi @MachineGenes, @ukarvind, @NellWatson, @sarahb, @akb, @AquaDoc - What are your thoughts on trustworthy AI? Does achieving it pose a significant challenge to the successful use of AI in healthcare?

AI is currently demonstrating outstanding performance in some aspects of the health and science sectors, and significant developments continue. I envisage that future AI will go on to make fantastic contributions to these sectors.

Good suggestions are proposed in the above post and the associated comments.

As trust in the ability of AI increases, we need to be careful not to trust it blindly 100% of the time. It is important that AI indicates its level of confidence, so that when the AI is unsure a professional can be alerted to pay specific attention to the case in question. The same applies to potential bias (e.g. when the dataset is weaker for some scenarios). As stated above, an explanation of why an AI produces each prediction and/or recommendation will be useful.
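
A toy sketch of the "alert a professional when the AI is unsure" idea, assuming a simple confidence threshold. The 0.85 cut-off, the case IDs, and the labels below are illustrative assumptions, not a validated clinical policy.

```python
# Toy sketch of confidence-based deferral: act on the model's output only when it
# is confident enough; otherwise route the case to a clinician for review.
# The 0.85 threshold and the example cases are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    prediction: str
    confidence: float
    action: str          # "auto-report" or "refer-to-clinician"

def triage(case_id: str, prediction: str, confidence: float,
           threshold: float = 0.85) -> Decision:
    """Accept the model's prediction only when its confidence clears the threshold."""
    action = "auto-report" if confidence >= threshold else "refer-to-clinician"
    return Decision(case_id, prediction, confidence, action)

# Hypothetical cases: the second one falls below the threshold and is escalated.
print(triage("scan-001", "no finding", 0.97))
print(triage("scan-002", "possible nodule", 0.62))
```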

It’s worth remembering that correlation does not imply causation. The best of today’s AI is very good at the former and poor at the latter. It can identify patterns in data and use these to make predictions or a diagnosis, but it does not (yet) understand the underlying processes that cause things to happen.

In terms of social impact, this means we should be very careful when AI is involved in complex, life-changing situations. For those, a combination of multiple (independently developed and trained) AI systems and humans might be the best approach. We don’t want to end up in the hypothetical scenario of Minority Report.

AI also lacks “common sense”, or wide-ranging experience of most aspects of the real world. A system that has excellent abilities in a specific narrow field is still susceptible to making mistakes when a real-world event outside of its knowledge has a significant impact on the scenario in question. This is why general-purpose AI systems (or subsystems, or AI networks) could help make AI more robust. Interestingly, at least one company announced this week that it is making a significant effort to develop a general-purpose AI, with the aim of exceeding the ability of experts. Such an audacious goal sounds like a potential XPRIZE challenge to me :slight_smile:

LG strives to build general-purpose AI

Thanks for sharing your thoughts.

Hi @techspeaker, @nastyahaut, @bjcooper - As tech experts, you might have input to share on this topic. What are your views on bias and the neglect of social norms (which leads to decisions that are technically correct but socially unacceptable) as the key barriers to trustworthy AI?

Health systems in Africa face several structural challenges. National medical systems often suffer from shortages of qualified healthcare professionals or supplies, resulting in divergent outcomes for patients depending on the facility and the service they need. In addition to accessibility barriers and rural-urban disparities, a lack of awareness of health issues can be a barrier to seeking care, to receiving more effective treatments, and to more effective public health policies. Even when facilities and staff are available, affordability can put needed services out of reach of patients.

AI can help plug these gaps and enhance outcomes, and large corporations and startups alike are developing AI-focused healthcare solutions for these challenges.

https://info.microsoft.com/ME-DIGTRNS-WBNR-FY19-11Nov-02-AIinAfrica-MGC0003244_01Registration-ForminBody.html

Thanks @mashizaq for sharing your thoughts on the gaps in African health systems. You might like to add more on the barriers to the efficient implementation of AI in those health systems - for example, the accuracy of AI systems leading to trust issues.