Judging Criteria for an Interoperability Prototype and Demonstration

In Round 1, teams write a proposal describing how they will design a solution that aggregates and harmonizes health data to address a country's primary care needs.

In Round 2, teams will actually build an interoperability prototype and test its performance with existing point-of-care systems and against predetermined criteria.

  • How long should we give teams to build their interoperability prototype?

  • What are some illustrative measurement targets we could use as judging criteria for the prototypes?

  • What are some benchmarks we could use to evaluate qualities such as

  • Performance consistency

  • Speed of transactions

  • Cost

  • Flexibility

  • Extensibility

  • Workflow fit

Hi @ymedan, @mashizaq, @SArora, @shamakarkal, @addy_kulkarni, @Nitesh, @scveena, @jonc101, @ajchenx, @RahulJindal - Given your vast experience in the digital healthcare space, you might have inputs on judging and evaluating an interoperability prototype. Please share your thoughts in this discussion.

I think all solutions with a backend connection to an EHR/EMR should conform to FHIR.
As for judging, scores should be given for clinical utility, UX on both the patient and clinician side, and security and privacy. This is a minimal set.
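To make the FHIR point concrete, here is a minimal sketch (in Python; the helper name and the depth of checking are my own illustration, not an official validator) of a shallow conformance check on an incoming payload. Real conformance testing would validate against the official FHIR StructureDefinitions or use a dedicated library:

```python
import json

def looks_like_fhir_patient(payload: str) -> bool:
    """Shallow check that a JSON payload resembles a FHIR R4 Patient.

    This only inspects top-level fields; full validation is out of scope.
    """
    try:
        resource = json.loads(payload)
    except json.JSONDecodeError:
        return False
    return (
        isinstance(resource, dict)
        and resource.get("resourceType") == "Patient"
        and isinstance(resource.get("name", []), list)
    )

# Illustrative payload using standard FHIR R4 Patient fields.
sample = json.dumps({
    "resourceType": "Patient",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1990-04-12",
})
print(looks_like_fhir_patient(sample))  # True
```

A check like this could gate judging: prototypes whose EHR-bound payloads fail even a shallow test would lose conformance points before deeper review.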

I judge the interoperability of healthcare applications in these areas:

  1. Data interoperability: Use of international data standards such as SNOMED CT, ICD, CPT, LOINC, RxNorm, HPO, etc. Use of UMLS for connecting common standards. Standard ways to integrate local custom codes (concepts) with the international standard codes. If the application supports multiple languages, data interoperability also means language independence: the data model is defined at the concept level, so it can be expressed in, and interoperated across, any language.
  2. Content interoperability: Use of international standards for presenting health information such as medical records, e.g. HL7 FHIR or other common public standards. If a private content format is used, it should follow a standard approach to content exchange so that receiving applications can easily understand and consume it.
  3. Knowledge interoperability: If knowledge representation is used for an AI or RPA application, it should also follow international standards closely in order to achieve interoperability in knowledge computation.
  4. Exchange interoperability: Use of data exchange standards, such as REST APIs, event-driven messaging, or data streaming.
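As a small illustration of point 1 (integrating local custom codes with international standards), here is a sketch in Python. The local codes and the mapping table are hypothetical; a real system would manage such crosswalks in a terminology service, e.g. one backed by UMLS:

```python
# Hypothetical crosswalk from a site's local lab codes to LOINC.
# In production this lives in a terminology service, not a literal dict.
LOCAL_TO_LOINC = {
    "GLU-F": ("2345-7", "Glucose [Mass/volume] in Serum or Plasma"),
    "HB":    ("718-7",  "Hemoglobin [Mass/volume] in Blood"),
}

def to_standard_coding(local_code: str) -> dict:
    """Return a FHIR-style Coding for a mapped local lab code."""
    code, display = LOCAL_TO_LOINC[local_code]
    return {"system": "http://loinc.org", "code": code, "display": display}

print(to_standard_coding("HB")["code"])  # 718-7
```

Judges could then score data interoperability partly on how cleanly a prototype externalizes such mappings rather than hard-coding local concepts.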

@Shashi Overall, monitoring activities should answer this question:
Is the intervention working as intended?
Monitoring activities can measure changes in performance over time, increasingly in real time, allowing course-corrections to improve implementation fidelity. Plans for monitoring digital health interventions should focus on generating data to answer the following questions, where “system” is defined broadly as the combination of technology software, hardware, and user workflows:

  • Does the system meet the defined technical specifications?
  • Is the system stable and error-free?
  • Does the system perform its intended tasks consistently and dependably?
  • Are there variations in implementation across and/or within sites?
  • Are benchmarks for deployment being met as expected?
Effective monitoring entails collection of data at multiple time points throughout a digital health intervention’s life-cycle and ideally is used to inform decisions on how to optimize content and implementation of the system. As an iterative process, monitoring is intended to lead to adjustments in intervention activities in order to maintain or improve the quality and consistency of the deployment.
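The stability and consistency questions above can be made measurable quite cheaply. Here is a minimal sketch (field names and the log format are assumptions of mine, not part of any standard) that rolls a transaction log into two trackable metrics:

```python
from statistics import mean

# Hypothetical per-transaction log entries collected by the prototype.
log = [
    {"latency_ms": 120, "ok": True},
    {"latency_ms": 340, "ok": True},
    {"latency_ms": 95,  "ok": False},  # failed exchange
]

# "Is the system stable and error-free?" -> error rate over the window.
error_rate = sum(1 for t in log if not t["ok"]) / len(log)

# "Does the system perform consistently?" -> track latency over time.
avg_latency = mean(t["latency_ms"] for t in log)

print(f"error rate: {error_rate:.1%}, mean latency: {avg_latency:.0f} ms")
# → error rate: 33.3%, mean latency: 185 ms
```

Computed at multiple time points, these become exactly the course-correction signals the monitoring plan calls for.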

Thanks @ymedan, @ajchenx and @mashizaq for sharing these insights. We have taken note of all the points. Ideally, how much time should we give the teams to build their prototype?

Hi @preciouslunga, @jblaya, @synhodo, @poppyfarrow, @Vishalgandhi, @RKadam, @reubenwenisch, @vipat, @kakkattil, @joshnesbit, @dollendorf, @kkatara, @alabriqu - Would love to hear your thoughts on judging and evaluating an interoperability prototype.

@Shashi Prototypes are often used in the final, testing phase of a Design Thinking process to determine how users behave with the prototype, to reveal new solutions to problems, or to find out whether the implemented solutions have been successful.
I think 2-4 weeks is an ideal period to build the prototypes. However, this will be influenced by the project timeline: a longer timeline allows an extended duration for prototype design, since there is no rush to complete the design thinking process.

Hi @Kwenz, @mario_perez, @sjatkins, @bngejane, @emcasey, @janansmith, @skornik, @arun_venkatesan, @Nvargas2, @dykki and @biki - Curious to know if you have any inputs on judging and evaluating an interoperability prototype for a digital health solution, and how much time would ideally be required to build one.

Hi @siimsaare, @Debbie_Rogers, @JoanneP, @krp, @rajpanda, @ajeeta, @Davisthedoc, @supratik12, @ClaireM, @stephaniel - Ideally, how long does it take to build a digital health solution? Is 12 months sufficient? Share your experience.

Ideally it should be more than 12 months.

Thanks @supratik12 for sharing your thought. What, in your view, would be an ideal (approximate) timeframe to build a prototype? Feel free to share some examples to help the teams understand.

Hi @tylerbn, @yuanluo, @Neal_Lesh, @dzera and @C_Castellaz - How long does it take to build a digital health solution? Is 12 months sufficient? Share your experience.

Hi all,

We have recently done some work with partners on using interoperability solutions to strengthen epidemiological surveillance. Based on that, here are some thoughts:

  1. How long should we give teams to build their interoperability prototype? — Depending on how many systems/devices are in scope, this could range from 1 to 3 months.
  2. What are some illustrative measurement targets we could use as judging criteria to judge prototypes? — In addition to what's been mentioned in the comments above, the number of records successfully handled, error/crash rates, and usability could be a few.
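Pulling together the criteria suggested so far in this thread (clinical utility, UX, security/privacy, plus reliability measures like error rates), a simple weighted rubric could combine judges' scores into one number. The weights and the 0-10 scale below are purely illustrative:

```python
# Hypothetical judging weights; the challenge would set its own.
WEIGHTS = {
    "clinical_utility": 0.3,
    "ux": 0.2,
    "security_privacy": 0.3,
    "reliability": 0.2,
}

def prototype_score(scores: dict) -> float:
    """Weighted average of 0-10 judge scores across the criteria."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

judged = {"clinical_utility": 8, "ux": 7, "security_privacy": 9, "reliability": 6}
print(f"{prototype_score(judged):.1f}")  # → 7.7
```

Publishing the weights up front would also tell teams where to focus their 1-3 months of build time.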

Thanks @RKadam for sharing your thoughts. We expect teams to develop and deploy 3-6 solutions for real-life demonstration in a catchment area and the chosen health focus. What would be the ideal timeframe to build prototypes in this case?

Also, you could provide feedback on the overall prize timeline here

@ymedan ~ Just great! I love how simply you distilled it down to the minimal set. That’s helpful, especially now when we don’t yet have a specific country determined. It’s great to have some general criteria to work with and we can refine these more within the country context once the country partner is selected. Thank you!

@ajchenx ~ I love your approach to judging interoperability based on its adherence to common/international standards. I think this approach also brings us closer to a vision of sustainable scalability of the solution into other contexts and countries. I noted down the standards you mentioned, so thank you very much for those!

@mashizaq ~ As a project manager, I can really appreciate the focus on monitoring, because we have a similar process in the field of project management whereby we are always monitoring against our project plan and the execution of that plan. I see the parallel when it comes to monitoring the design and implementation of a digital intervention. I'm definitely going to share your list with the team! Thank you!

Thanks @HeatherSutton