June 23, 2020 | Austin, TX

Monitoring and Evaluating AI: Challenges and Practical Implications

In Conjunction with the SIIM 2020 Annual Meeting

What to expect?

Monitoring and evaluating AI continue to pose challenges for AI development and adoption. Join us to learn how to evaluate algorithm performance and monitor AI at your institution.

  • Examine issues in AI evaluation such as brittleness, bias, fairness, and generalizability
  • Explore metrics for evaluating AI performance
  • Review tools and methods for AI monitoring

The Summit brings together thought leaders and attendees for a robust discussion of where AI evaluation stands today and what to expect in the future. Whether you are a developer looking for insights from radiology leaders or a radiologist with an informatics background seeking guidance, the Summit can help you learn best practices for evaluating AI models and strategies for overcoming potential barriers to AI adoption.

Earn 6.5 CME or Category A Credit*

Who attends?

  • Industry partners and developers interested in pursuing data-sharing arrangements
  • Clinical informaticists and fellows, radiologists, residents, enterprise IT, clinical applications professionals, technologists
  • All those who want to learn more about AI evaluation and ongoing monitoring

Course Overview

  • Full-day focused content
  • Brief keynote presentations
  • In-depth panel discussions
  • Conversations with faculty and experts
  • Opportunity to forge long-term relationships with peers

Course Objectives

  • Identify phases of the AI lifecycle
  • Explain hurdles and steps to regulatory clearance
  • Define AI evaluation and common issues, including bias, brittleness, and fairness
  • Cite the tools and resources available to aid in the review and evaluation of AI models
  • Explain the steps of algorithm assessment and validation
  • Outline strategies for evaluating performance and monitoring AI algorithms