UNDERSTANDING AI STANDARDS IN HEALTH IT
AI capabilities such as decision support, summarization, image analysis, and automation are rapidly becoming embedded within EHRs, health information networks, and consumer-facing applications. In parallel, a new generation of AI-focused interoperability standards is emerging that emphasizes reusable, well-defined building blocks for safely connecting AI agents to clinical data, workflows, and tools, rather than bespoke interfaces for each use case.
As with API-based interoperability, these AI standards seek to enable composable architectures in which common services (for example, data access, identity, consent, and logging) can be orchestrated to support many different AI-enabled workflows without starting from scratch each time. However, because AI systems can autonomously interpret data and trigger actions, AI standards must also address additional dimensions such as explainability, risk management, auditing, and guardrails around model behavior.
Role of HL7® Standards in AI-Enabled Interoperability
HL7® standards, including HL7 Version 2, HL7 C-CDA, and HL7 FHIR®, continue to provide the core “language” for health data exchange that many AI solutions depend on. For example, AI applications often consume or produce patient demographics, encounters, problems, observations, medications, and procedures using HL7 FHIR resources and implementation guides (e.g., US Core profiles), enabling them to participate in workflows such as clinical decision support, quality measurement, and prior authorization.
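As an illustration, a FHIR search of the kind described above can be built, and its result Bundle unpacked, with only standard-library code. This is a minimal sketch: the base URL, patient ID, and sample Bundle below are hypothetical, though the search parameters and Bundle shape follow the FHIR R4 RESTful API.

```python
from urllib.parse import urlencode

# Hypothetical FHIR base URL; real deployments publish their own endpoints.
FHIR_BASE = "https://fhir.example.org/r4"

def observation_search_url(patient_id: str, loinc_code: str) -> str:
    """Build a FHIR search for a patient's observations by LOINC code."""
    params = urlencode({
        "patient": patient_id,
        "code": f"http://loinc.org|{loinc_code}",
        "_sort": "-date",
        "_count": "10",
    })
    return f"{FHIR_BASE}/Observation?{params}"

def extract_values(bundle: dict) -> list[tuple[str, float]]:
    """Pull (code display, value) pairs from a searchset Bundle."""
    results = []
    for entry in bundle.get("entry", []):
        obs = entry["resource"]
        coding = obs["code"]["coding"][0]
        value = obs.get("valueQuantity", {}).get("value")
        if value is not None:
            results.append((coding.get("display", coding["code"]), value))
    return results

# A minimal searchset Bundle, shaped like a vital-signs search response.
sample_bundle = {
    "resourceType": "Bundle",
    "type": "searchset",
    "entry": [{
        "resource": {
            "resourceType": "Observation",
            "status": "final",
            "code": {"coding": [{"system": "http://loinc.org",
                                 "code": "8867-4",
                                 "display": "Heart rate"}]},
            "valueQuantity": {"value": 72.0, "unit": "/min"},
        }
    }],
}

print(observation_search_url("123", "8867-4"))
print(extract_values(sample_bundle))
```

In practice an AI application would issue the search over an authenticated HTTPS connection and validate responses against the relevant profiles (for example, US Core) before use.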
These HL7-based standards and APIs function as foundational services that can be reused by multiple AI applications, consistent with the building-block approach described for API-based interoperability. Implementers must still define which specific HL7 resources are required, what value sets and profiles are used, which operations are permitted (for example, read-only vs. write), and how security, consent, provenance, and audit are managed for AI use cases that may involve large-scale, longitudinal, or real-time data access.
Model Context Protocol (MCP) as an Emerging AI Integration Standard
The Model Context Protocol (MCP) is an emerging open standard that defines how AI applications and agents connect to external tools, services, and data sources. MCP standardizes a JSON-RPC pattern for requesting context (“resources” and “prompts”), invoking operations (“tools”), and returning structured results across standard transports such as stdio and HTTP.
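The JSON-RPC pattern can be sketched in a few lines. The tool name and arguments below are hypothetical, though `tools/call` is the method MCP defines for invoking a server-advertised tool:

```python
import json
from itertools import count

# Monotonically increasing JSON-RPC request IDs.
_ids = count(1)

def mcp_tool_call(tool_name: str, arguments: dict) -> str:
    """Serialize an MCP 'tools/call' request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool; a real MCP server advertises tools via 'tools/list'.
request = mcp_tool_call("search_observations",
                        {"patient": "123", "code": "8867-4"})
print(request)
```

The message would be sent over one of the standard transports (stdio or HTTP), and the server's structured result carries the same `id` so the caller can correlate responses.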
In healthcare implementations, MCP can be layered on top of HL7 FHIR interfaces to structure tool calls (e.g., FHIR reads, searches, and write proposals) while enforcing clinical safety constraints, human-in-the-loop validation, and traceable handoffs between humans and AI. Identity, authorization, and delegation are still evolving across the ecosystem (including community extensions such as MCP-I) and active discussions are underway to align these standards with HIPAA, FDA software-as-a-medical-device (SaMD) guidance, and other policy requirements.
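One way to enforce human-in-the-loop validation is to have agent-facing tools return write proposals rather than performing writes directly. The sketch below is illustrative, not part of MCP or FHIR; the function names, proposal structure, and approval flow are all assumptions:

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class WriteProposal:
    """A proposed FHIR write that is queued for human review, not applied."""
    resource: dict
    proposal_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    status: str = "pending-review"

# In-memory queue of proposals awaiting clinician sign-off.
PENDING: dict[str, WriteProposal] = {}

def propose_service_request(patient_id: str, code: str) -> WriteProposal:
    """An AI agent may draft an order, but may not activate it."""
    proposal = WriteProposal(resource={
        "resourceType": "ServiceRequest",
        "status": "draft",            # never 'active' from an agent
        "intent": "proposal",
        "subject": {"reference": f"Patient/{patient_id}"},
        "code": {"coding": [{"system": "http://loinc.org", "code": code}]},
    })
    PENDING[proposal.proposal_id] = proposal
    return proposal

def approve(proposal_id: str, clinician_id: str) -> WriteProposal:
    """Human sign-off step: only now would the resource be submitted."""
    proposal = PENDING[proposal_id]
    proposal.status = f"approved-by:{clinician_id}"
    return proposal

p = propose_service_request("123", "24323-8")
print(p.status)                 # pending-review
approve(p.proposal_id, "dr-jones")
print(p.status)                 # approved-by:dr-jones
```

Recording the proposal ID, the proposing agent, and the approving clinician also supports the traceable human/AI handoffs noted above.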
For multi-agent scenarios, agent-to-agent standards such as A2A can complement MCP by enabling agent discovery (e.g., via Agent Cards), capability negotiation, and task-oriented collaboration across organizational boundaries. These standards, too, hold promise in the healthcare context.
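A minimal Agent Card and discovery check might look like the following. The agent name, URL, and skills are hypothetical, and the field names, which follow the public A2A draft, should be verified against the current specification:

```python
import json

# Illustrative Agent Card for a hypothetical healthcare agent.
agent_card = {
    "name": "discharge-summary-agent",
    "description": "Drafts discharge summaries from encounter data",
    "url": "https://agents.example.org/discharge",  # hypothetical endpoint
    "version": "0.1.0",
    "capabilities": {"streaming": False},
    "skills": [{
        "id": "draft-summary",
        "name": "Draft discharge summary",
        "tags": ["summarization", "fhir"],
    }],
}

def discover(card: dict, wanted_tag: str) -> list[str]:
    """Simple capability check: which advertised skills match a tag?"""
    return [s["id"] for s in card.get("skills", [])
            if wanted_tag in s.get("tags", [])]

print(json.dumps(agent_card, indent=2))
print(discover(agent_card, "summarization"))   # ['draft-summary']
```

In a cross-organizational deployment, a calling agent would fetch the card from a well-known location, negotiate capabilities, and then delegate tasks to the skills it discovered.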
Key Implementation Considerations for AI Standards
As with API-based interoperability, successful AI-enabled interoperability requires implementers to specify and constrain multiple layers of the stack for a given use case. When deploying AI systems that rely on HL7, MCP, and emerging agentic standards, stakeholders may need to address at least the following:
- Data scope and resources
Implementers should define which HL7 resources and data elements are accessible to AI components (for example, limiting to certain FHIR resource types, profiles, or historical time windows) and which code systems (such as LOINC, SNOMED CT, or RxNorm) are in scope for AI processing.
- Operations and permissions
It is important to specify whether AI systems can only read data, or may also create, update, or suggest changes, and to constrain MCP tool calls accordingly (for instance, allowing an AI agent to propose an order that still requires human sign-off).
- Transport and orchestration
AI components commonly use HTTP-based transports and existing HL7 FHIR RESTful APIs, but MCP orchestration can introduce additional routing and coordination layers between AI agents, EHRs, and ancillary services (including event-driven triggers and subscriptions).
- Identity, security, and trust
OAuth2/OpenID-based identity, TLS encryption, and comprehensive audit logging are typically recommended so that AI requests can be tied to specific users, agents, and organizations, with clear records of data access and actions taken. Trust frameworks and governance agreements may be necessary to define how health systems, AI vendors, and content providers establish, monitor, and revoke trust relationships.
- Transparency, privacy, and patient expectations
Given concerns about secondary use and sharing of sensitive health information, implementers should ensure that AI deployments include clear privacy notices, transparency mechanisms for how data and outputs are used, and controls to limit redisclosure or profiling beyond the intended clinical or operational purpose.
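Several of these considerations, such as resource allowlists, read-only permissions, and audit logging, can be combined into a single policy gate placed in front of an AI agent. The sketch below is illustrative; the policy contents, agent identifiers, and function names are assumptions:

```python
from datetime import datetime, timezone

# Illustrative policy for one AI deployment: a FHIR resource allowlist
# and read-only operations (no create/update/delete).
POLICY = {
    "allowed_resources": {"Patient", "Observation", "MedicationRequest"},
    "allowed_operations": {"read", "search"},
}

# Append-only audit trail of every attempted interaction.
AUDIT_LOG: list[dict] = []

def authorize(agent_id: str, operation: str, resource_type: str) -> bool:
    """Check a proposed FHIR interaction against policy and record it."""
    allowed = (resource_type in POLICY["allowed_resources"]
               and operation in POLICY["allowed_operations"])
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "operation": operation,
        "resource_type": resource_type,
        "allowed": allowed,
    })
    return allowed

print(authorize("summarizer-1", "search", "Observation"))      # True
print(authorize("summarizer-1", "create", "ServiceRequest"))   # False
print(len(AUDIT_LOG))                                          # 2
```

In production the policy decision would be bound to the OAuth2/OpenID identity of the requesting user, agent, and organization, and the audit trail would be written to durable, tamper-evident storage rather than an in-memory list.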
Looking Ahead
As AI capabilities advance, standards such as HL7 FHIR, MCP, and A2A are likely to evolve in tandem, with new profiles, implementation guides, and best practices focused on safety, governance, and observability, including work such as AI Transparency on FHIR to document AI-influenced data and processes. Stakeholders deploying AI in Health IT should monitor emerging work by standards development organizations, regulators, and industry collaborations to ensure that AI solutions remain interoperable, trustworthy, and aligned with evolving policy and regulatory frameworks.
