Intervention Risk Management Practices

Overview

The Assistant Secretary for Technology Policy and Office of the National Coordinator for Health Information Technology (ASTP/ONC), through its Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule, requires certified health IT developers to establish intervention risk management (IRM) practices for AI-enabled functionality.

EHRYourWay has developed practices to manage risks associated with AI-enabled features, informed by industry standards including the National Institute of Standards and Technology (NIST) AI Risk Management Framework. These practices emphasize clinician oversight, vendor accountability, and secure integration of AI services within healthcare workflows.

AI-Enabled Features

EHRYourWay integrates third-party AI services to provide the following capabilities:

  • AI Documentation Assistant – AI-powered clinical documentation supporting ambient listening, telehealth processing, voice dictation, and freeform note cleanup. Clinicians review and finalize all AI-drafted content.
  • AI Chatbot – Natural language voice or text commands for EHR navigation, chart access, eligibility checks, orders, and prescriptions without menu navigation.
  • AI-Powered Voicemail – Automatic transcription, priority classification, and caller detail extraction for voicemails displayed directly in the EHR.
  • AI-Assisted Document Indexing – Automated categorization, patient matching, and configurable routing for scanned documents, eFaxes, and uploaded files.
  • AI-Assisted Check Posting – AI extraction of payment information from remittance documents with automatic matching and posting to client accounts.

These features utilize AI services from OpenAI, Google, Microsoft Azure OpenAI, and Deepgram. EHRYourWay does not train or fine-tune AI models internally; we integrate established AI platforms through secure APIs and apply application-level controls to ensure appropriate use within healthcare workflows.

Risk Assessment and Mitigation

Accuracy and Reliability

AI-generated content may contain inaccuracies that could affect clinical documentation quality.

Mitigation measures include:

  • Clinician review and validation of all AI-generated notes prior to finalization; users are responsible for verifying content accuracy before signing off on any AI-assisted documentation
  • Customizable prompts in the Documentation AI Assistant, allowing practices to tailor AI behavior to their clinical workflows and documentation standards
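The review requirement above can be enforced in software as well as policy. A minimal sketch, with hypothetical field names rather than the actual EHRYourWay data model:

```python
def finalize_note(note: dict) -> dict:
    """Refuse to sign an AI-drafted note until a clinician has reviewed it."""
    if note.get("ai_generated") and not note.get("reviewed_by"):
        raise ValueError("AI-assisted note requires clinician review before signing")
    return {**note, "status": "signed"}
```

A signed note is only produced when a reviewer is recorded, making the "clinician in the loop" requirement a hard constraint rather than a convention.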

Bias in Outputs

Third-party AI models may produce biased outputs based on their underlying training data.

Mitigation measures include:

  • Selection of established AI vendors with published responsible AI practices
  • Clinician oversight of AI-generated content, with the ability to edit or reject outputs before use

Transparency and Explainability

Opaque AI reasoning can undermine clinician trust and accountability.

Mitigation measures include:

  • Clear identification of AI-generated content within the platform
  • Clinician engagement in reviewing, editing, and approving AI-assisted documentation before finalization

Security and Privacy

Unauthorized access or data exposure may compromise patient information.

Mitigation measures include:

  • Business Associate Agreements (BAAs) with all AI vendors handling protected health information
  • Encryption of data in transit and at rest
  • Access controls limiting AI feature availability to authorized users
  • Regular security assessments and vulnerability testing
  • Compliance with HIPAA and other applicable healthcare data protection requirements
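One way to back the encryption-in-transit and access-control bullets with code is an outbound allow-list check before any protected health information leaves the application. The endpoint host below is a placeholder, not a real vendor URL:

```python
from urllib.parse import urlparse

# Assumption: an allow-list of approved AI vendor hosts (placeholder value)
APPROVED_AI_HOSTS = {"api.ai-vendor.example"}

def validate_outbound_url(url: str) -> None:
    """Require TLS and an approved vendor host before PHI leaves the EHR."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError("PHI must be sent over HTTPS only")
    if parsed.hostname not in APPROVED_AI_HOSTS:
        raise ValueError(f"host {parsed.hostname!r} is not an approved AI vendor")
```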

Vendor Management

Reliance on third-party AI services introduces dependency and continuity risks.

Mitigation measures include:

  • Selection of established AI vendors with enterprise-grade reliability and support
  • Monitoring of vendor service status and performance
  • Contingency planning for service disruptions
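Contingency planning for a vendor outage can be sketched as a brief retry followed by graceful degradation. The function and status names here are illustrative assumptions, not the product's actual behavior:

```python
import time

def transcribe_with_fallback(audio_id: str, vendor_call, retries: int = 2,
                             delay: float = 0.1) -> dict:
    """Retry a vendor AI call briefly, then degrade to manual handling."""
    for attempt in range(retries + 1):
        try:
            return {"status": "ok", "text": vendor_call(audio_id)}
        except ConnectionError:
            if attempt < retries:
                time.sleep(delay)  # simple pause before retrying
    # Contingency: queue the item for staff instead of failing silently
    return {"status": "queued_for_manual_review", "audio_id": audio_id}
```

The design choice worth noting is that the fallback preserves the work item (queued for staff) rather than dropping it, so an AI outage degrades service quality without losing data.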

Governance

EHRYourWay maintains oversight of AI-enabled features. Integration of new AI functionality requires review of security, privacy, and operational considerations. When significant risks are identified, proposed functionality is escalated for senior review prior to deployment.

Monitoring and Continuous Improvement

Ongoing monitoring processes are in place to track the performance of AI features and surface issues early. These include:

  • Feedback mechanisms for clinicians to report issues with AI-generated content
  • Periodic reviews of AI feature performance
  • Evaluation of AI vendor updates and model changes
  • Incident tracking and reporting procedures for adverse events related to AI functionality
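The feedback and incident-tracking mechanisms above can be pictured as a simple structured record per report. This is a hypothetical sketch using an in-memory list; a real system would persist reports to an incident-tracking store:

```python
from datetime import datetime, timezone

ISSUE_LOG: list[dict] = []  # in-memory stand-in for an incident-tracking store

def report_ai_issue(feature: str, description: str, severity: str = "low") -> dict:
    """Record a clinician-reported problem with AI output for later review."""
    record = {
        "feature": feature,
        "description": description,
        "severity": severity,
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "status": "open",
    }
    ISSUE_LOG.append(record)
    return record
```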