Healthcare is facing renewed pressure to improve interoperability, as federal agencies ramp up efforts to combat information blocking. The push includes promoting an Interoperability Framework, expanding the United States Core Data for Interoperability (USCDI), and increasing accountability among providers and technology developers. In this context, industry leaders are exploring innovative approaches like “conversational interoperability,” allowing clinicians to use natural language to interact with electronic health records (EHRs) and retrieve necessary information instantly.
While the potential of new technologies, especially artificial intelligence (AI) and large language models (LLMs), raises hopes for a simpler interaction with EHRs, historical trends suggest that excitement may outpace practical applications. Previous initiatives, from early vocabulary standards to Fast Healthcare Interoperability Resources (FHIR), have promised transformative changes but have often faltered due to a critical issue: the lack of clean, structured, and clinically valid data.
The Challenge of Data Integrity
The concept of conversational interoperability may gain traction in the coming months, particularly as AI-driven interfaces demonstrate their capabilities. This approach is appealing because it aims to alleviate the challenges clinicians face when navigating complex EHR systems. However, AI can only work with the data already in the record. If that data is incomplete, unstructured, or inaccurate, the results of any natural-language query will be flawed in the same ways. Poor data, in short, makes for poor conversations.
The limitations of LLMs further complicate the situation. These models can produce confident yet incorrect responses, and they require substantial computational resources. Without structured inputs, the risk of amplifying existing gaps and errors increases. Vendor demonstrations may appear impressive, but real-world applications reveal the fragility of systems reliant on weak data foundations.
Most healthcare data remains unstructured, with vital information about symptoms, treatments, and patient context often buried in free-text notes or scattered across various systems. This lack of accessibility hampers clinicians’ ability to obtain comprehensive views of their patients, undermining both care quality and safety. While standards such as FHIR offer mechanisms for data packaging and transmission, they do not guarantee that the data is clinically meaningful. Often, FHIR serves as a conduit for inconsistent or incomplete information rather than ensuring usability.
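The distinction between data that is merely transmissible and data that is clinically meaningful can be made concrete. The sketch below, using hypothetical example payloads, contrasts two FHIR Condition resources: both are structurally valid and will travel through a FHIR pipe without complaint, but only the coded one can be queried, aggregated, or matched against quality measures downstream.

```python
# Two structurally valid FHIR Condition resources (illustrative payloads,
# not from any real system). Both pass basic schema checks, but only the
# first carries a machine-readable terminology code.
coded_condition = {
    "resourceType": "Condition",
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "44054006",  # SNOMED CT: Diabetes mellitus type 2
            "display": "Diabetes mellitus type 2",
        }],
        "text": "Type 2 diabetes",
    },
}

text_only_condition = {
    "resourceType": "Condition",
    # Free text only -- FHIR transmits it happily, but no downstream
    # system can safely match it against "diabetes" queries or measures.
    "code": {"text": "pt has long-standing sugar problems"},
}

def is_computable(condition: dict) -> bool:
    """True when the Condition carries at least one terminology coding."""
    return bool(condition.get("code", {}).get("coding"))

print(is_computable(coded_condition))      # → True
print(is_computable(text_only_condition))  # → False
```

Both resources are "FHIR-compliant," which is precisely the point: the standard defines the envelope, not the usability of its contents.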
Advocating for a Universal Medical Coder
To address these challenges, the development and implementation of a universal medical coder is proposed. This system would translate clinical concepts into structured, standardized, and contextually accurate representations at the point of care. By effectively mapping free-text inputs and unstructured documentation into consistent codes across various vocabularies, including International Classification of Diseases (ICD), Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and Logical Observation Identifiers Names and Codes (LOINC), such a tool could enhance regulatory compliance and billing efficiency.
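What such a coder does can be sketched in miniature. The toy below is an assumption-laden illustration, not a proposed implementation: a production system would use clinical NLP and a terminology service (such as one backed by UMLS) rather than a hard-coded lookup table, and the table entries here are only examples of real ICD-10 and SNOMED CT codes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedConcept:
    """One clinical concept expressed across multiple vocabularies."""
    icd10: str
    snomed: str
    display: str

# Toy terminology table (illustrative). A real universal coder would
# resolve concepts via a terminology service, not a dictionary.
CONCEPT_TABLE = {
    "hypertension": CodedConcept("I10", "38341003", "Hypertensive disorder"),
    "type 2 diabetes": CodedConcept("E11.9", "44054006", "Diabetes mellitus type 2"),
}

def code_note(note: str) -> list[CodedConcept]:
    """Naively scan free text for known concept mentions and return codes."""
    text = note.lower()
    return [concept for phrase, concept in CONCEPT_TABLE.items() if phrase in text]

matches = code_note("Assessment: type 2 diabetes, well controlled; hypertension stable.")
for m in matches:
    print(m.icd10, m.snomed, m.display)
```

The essential idea survives the simplification: the same free-text mention yields consistent, multi-vocabulary codes at the point of documentation, rather than being reconstructed (or lost) downstream.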
The true value of a universal medical coder lies in its ability to create a robust clinical data foundation. By capturing relevant concepts in real time during the clinician's workflow, it ensures that data remains accurate, complete, and interoperable across systems. This would allow frameworks like FHIR to fulfill their promise, because the data flowing through them would finally be as usable as the pipes that carry it.
Healthcare leaders are encouraged to avoid chasing the latest buzzwords as definitive solutions. While conversational interoperability is an intriguing concept, it should be viewed as one component of a larger system. The primary challenge persists: the industry must prioritize data integrity and fidelity before advanced applications, such as predictive AI or population health analytics, can achieve lasting impact.
As the healthcare sector embraces innovation and enthusiasm, it is crucial to maintain a realistic outlook. Impressive technology demonstrations should not distract from the fundamental work required to build structured, clinically valid datasets. Policymakers, vendors, and providers must acknowledge that true interoperability cannot be achieved through user interfaces or standards alone. Instead, it is realized when every patient encounter results in usable, exchangeable, and meaningful data.
The ongoing movement towards interoperability in healthcare is both necessary and overdue. Increased regulatory enforcement against information blocking, the expansion of the USCDI, and industry innovation are vital steps forward. However, without prioritizing structured, clinically valid data as the cornerstone of these initiatives, the full potential of interoperability will remain unfulfilled.

The emergence of concepts like conversational interoperability underscores both the opportunities and the risks in the current landscape. These trends may enhance usability, but they cannot compensate for inadequate data quality. A universal medical coder, consistently applied across care settings, offers a practical answer to the persistent problem of data integrity. Addressing this core necessity is essential if healthcare is to break the cycle of over-promised innovations and realize the vision of truly interoperable, patient-centered care.
