Today’s Challenges With Pursuing Interoperability Perfection
Thanks to ongoing progress in the interoperability of medical information across US health care, clinicians can increasingly view patient data from other sites of care in their local electronic health record (EHR)—a huge leap forward. Unfortunately, these data typically remain separate from the local data in the EHR—effectively co-locating but not combining different medication lists, problem lists, laboratory results, and so forth. When data are viewable but not combined, clinicians are less likely to use data from outside sources; cognitive effort is spent marrying local EHR data with outside data (for example, moving between two or more problem lists, medication lists, or encounter lists); and time is consumed manually reconciling outside data with local data. Furthermore, unreconciled outside data typically cannot be included in clinical notes or used to drive decision support.
While it may seem the remaining work to achieve data integration and realize user-friendly interoperability is minor, the current reality is far from it. The pace of progress is painfully slow because formal policies and informal norms favor perfection over pragmatism in how data from different sources are treated. If we want clinically functional interoperability, clinicians must become more involved to promote pragmatic decisions about when and how to combine data across sources.
The chasm between data availability and data integration exists because we lack the breadth of data standards needed, and the full implementation of the standards that do exist. As a result, much electronic health data are exchanged today either as plain text or as discrete data unique to the source system. In both cases, clinicians can see and read these data but cannot effectively use them as they do data native to their EHR.
This is true even for data types that seem highly uniform, as illustrated by the case of lab results. LOINC is a mature, widely accepted clinical terminology for representing lab test orders and results. In practice, however, LOINC has not been fully implemented: many lab tests and lab results found in a given EHR do not include the associated LOINC identifier, particularly in EHRs and other information systems that predate the development of LOINC (or have inherited data from older systems). Even if we were to solve this large-scale, costly issue by fully implementing LOINC across all EHRs, the true Achilles’ heel is that LOINC is insufficiently specific to achieve full standardization: Tests with identical LOINC identifiers often differ along other dimensions, such as testing methodology, machine calibration, or reference ranges. Given the cost and timeline of developing standards, we are unlikely ever to have standards of sufficient descriptive precision to ensure that every test result is truly identical.
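As a minimal illustration of this gap, consider two results that share a LOINC code yet differ on dimensions the code does not capture. The sketch below uses the real LOINC code for hemoglobin A1c (4548-4), but the result values, methods, and reference ranges are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LabResult:
    loinc_code: str          # "4548-4" is the LOINC code for hemoglobin A1c
    value: float
    units: str
    method: str              # testing methodology: not captured by the code alone
    reference_range: tuple   # (low, high): also lab-specific

# Hypothetical results from two different labs
a1c_local = LabResult("4548-4", 6.1, "%", "HPLC", (4.0, 5.6))
a1c_outside = LabResult("4548-4", 6.3, "%", "immunoassay", (4.2, 5.8))

# The shared code tells us the two tests measure the same analyte...
same_analyte = a1c_local.loinc_code == a1c_outside.loinc_code

# ...but says nothing about methodology or reference range, which differ here
fully_equivalent = (a1c_local.method == a1c_outside.method
                    and a1c_local.reference_range == a1c_outside.reference_range)

print(same_analyte, fully_equivalent)  # True False
```

The code match establishes only that the analyte is the same; deciding whether the two results can be treated as interchangeable is exactly the judgment call the standard does not make.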
For similar reasons, manual efforts to translate across all possible combinations of dimensions—mapping lab equipment manufacturer, units of measurement, and reference range, across the thousands of different lab tests, and variations within each test—are a non-starter. Even efforts to map common tests are costly at scale. At our health system (UCSF Health), we manually mapped a broad swath of clinical lab and radiology orders and results with one partner community hospital at a cost of more than $1 million in expert labor. However, as each lab changes its assays over time, these mappings must be maintained at considerable ongoing expense, or they quickly become out of date.
These realities leave us to face the fundamental question—do these differences matter? For example, how different can a hemoglobin A1c result be from one lab to another? To date, the answer is “different enough” such that current practice and EHR configurations overwhelmingly decide not to treat HbA1c results from different labs as equivalent. Specifically, HbA1c results from outside labs are allowed “in” the EHR but are not combined with HbA1c results from the in-house lab. This favoring of perfection over pragmatism prevents clinicians from trending results over time across institutions and from importing outside labs into a local record with the appropriate context. These are two huge boulders on the road to clinically functional interoperability.
Envisioning A More Pragmatic Approach To Data Integration
It is time to revisit the prioritization of perfection over pragmatism and shift to an approach to integration that achieves clinically functional interoperability. The key to success in the pragmatic approach is differentiating two scenarios: one in which the same test from different laboratories is sufficiently clinically different that the results cannot safely be treated and trended as equivalent, and one in which the distinctions, while genuine, are not clinically meaningful.
Examples of scenario one: labs with significant differences in methodology (for example, immunoassay versus mass spectrometry) such that performance characteristics need to be kept separate. Prostate-specific antigen (PSA), for example, should generally not be trended across labs because the performance characteristics can differ to a degree that has practical significance for patient care. Troponins and many hormone assays, such as testosterone, are other examples. Examples of scenario two: a hemoglobin level, an LDL-C, or a creatinine. While these might technically differ, in the vast majority of scenarios they are sufficiently equivalent measures that they can be considered clinically interchangeable, and any clinician would feel comfortable trending results from different labs even with minor variations in methods or reference ranges.
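One way to picture the pragmatic approach is as a simple policy table that tells the EHR whether results sharing a code may be trended across labs. The LOINC codes below are real, but the policy entries are illustrative assumptions standing in for the specialty-society guidance we propose, not an actual guideline:

```python
# Illustrative policy table: LOINC code -> may results be trended across labs?
# (Entries are hypothetical stand-ins for specialty-society guidance.)
TREND_POLICY = {
    "4548-4": True,    # HbA1c: scenario two, clinically interchangeable
    "718-7": True,     # hemoglobin
    "2160-0": True,    # creatinine
    "2857-1": False,   # PSA: scenario one, keep sources separate
    "10839-9": False,  # troponin I
}

def combine_for_trending(local_results, outside_results):
    """Merge outside results into the local trend only when policy allows."""
    combined = list(local_results)
    kept_separate = []
    for r in outside_results:
        # Default to the conservative choice for tests without a policy entry
        if TREND_POLICY.get(r["loinc"], False):
            combined.append(r)
        else:
            kept_separate.append(r)
    return combined, kept_separate

local = [{"loinc": "4548-4", "value": 6.1, "lab": "in-house"}]
outside = [{"loinc": "4548-4", "value": 6.3, "lab": "community hospital"},
           {"loinc": "2857-1", "value": 1.2, "lab": "community hospital"}]
trend, separate = combine_for_trending(local, outside)
# The outside HbA1c joins the local trend; the PSA stays in a separate view.
```

The design choice worth noting is the default: unlisted tests fall back to today’s conservative behavior, so guidance can be adopted incrementally, test by test.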
The key question is: Who should be the arbiter in the pragmatic approach? We suggest that specialty societies define the guidelines for their commonly used lab tests, describing when different data sources can be integrated and when they should be kept separate, which EHR vendors can then implement. This approach would counterbalance, and perhaps standardize, the current involvement of lab directors who, under College of American Pathologists guidelines, approve the approach to reporting outside lab results in their home institutions. More broadly, this approach has precedent in federal programs where specialty societies participate in defining quality measures relevant to their specialty.
While more pragmatic guidelines are likely to substantially advance clinically functional interoperability, there are risks to closely track. Integrating more, but not all, types of lab results could create inconsistency in what clinicians see and experience, risking confusion and missed information. In addition, perspectives on which differences are clinically meaningful may differ by specialty for the same lab test. The detailed performance characteristics of a thyroid-stimulating hormone assay may not matter to a primary care doctor, for example, but may matter greatly to an endocrinologist faced with a rare case. We lack a framework for balancing the risks, benefits, and financial costs of integrating data at varying levels of customization. Instead, we default to the most conservative approach, which likely overweighs these risks and undervalues the tremendous potential gains from even modest increases in data integration. Without clinically functional interoperability, a provider may not notice an important trend, such as a gradual decline in kidney or liver function, or may spend significant time manually tracking the data. The result: missed clinical findings alongside provider inefficiency and frustration.
Many Interoperability Needs Would Benefit From A Pragmatic Approach
While our argument uses lab results as the focal example, the problem and the pragmatic approach to solving it extend to other key types of clinical data. The problem of interoperability of prescription or inpatient medications is similar—for example, in which cases is the same pharmaceutical produced by different manufacturers equivalent? In other clinical data domains, the challenge is even more daunting because terminology standards (such as LOINC) do not yet exist or are not widely accepted. For example, the United States Core Data for Interoperability (USCDI), the set of data classes that federal regulations mandate for information exchange, provides a high-level, eight-item typology of “Clinical Notes.” However, installed EHRs often have legacy typologies unique to a site, and mapping these site-specific typologies to the USCDI is complex, local, uncertain, and unregulated. Here again, specialty societies are well-positioned to guide approaches to typology development that favor clinically functional solutions.
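The note-type mapping problem can be sketched the same way: each site maintains a local crosswalk from its legacy note types to the USCDI “Clinical Notes” classes. The local note names below are invented for illustration, and the target classes are examples drawn from that eight-item typology:

```python
# Hypothetical local note types mapped to USCDI "Clinical Notes" classes.
# Local names are illustrative; a real crosswalk would cover every legacy type.
NOTE_TYPE_MAP = {
    "MD Clinic Note":       "Progress Note",
    "Resident H&P":         "History & Physical",
    "DC Summary - Interim": "Discharge Summary Note",
    "Rad Read Final":       "Imaging Narrative",
}

def to_uscdi_class(local_type: str) -> str:
    # Unmapped legacy types are flagged rather than silently guessed,
    # surfacing the "complex, local, uncertain" mapping work for review.
    return NOTE_TYPE_MAP.get(local_type, f"UNMAPPED:{local_type}")
```

Because this crosswalk is rebuilt independently at every site, the mappings can disagree across institutions for the same note content, which is precisely the gap shared guidance could close.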
Fulfilling the promise of interoperability requires charting a new pragmatic approach that recognizes the limitations of standards and increasingly engages clinicians in driving decisions about how to deliver functional solutions. It will not get us to perfect interoperability, but we will end up a lot closer to good.
The authors wish to thank Ed Thornborrow for his review and feedback on this post. Julia Adler-Milstein is on the board and is a shareholder in Project Connect. She is an uncompensated adviser to CommonWell Health Alliance. Aaron Neinstein has received research support from Cisco Systems, Inc., and the Commonwealth Fund; has been a consultant to Steady Health, Medtronic, Eli Lilly, Roche, Intuity Medical, Nokia Growth Partners, WebMD, and Grand Rounds; has received speaking honoraria from Academy Health and Symposia Medicus; and is an uncompensated medical adviser for Tidepool. Russ Cucina is a consultant to DaVita Kidney Care, Inc., an uncompensated adviser to Doximity, Inc., and an uncompensated member of the board of directors of Carequality, Inc.