The cost and risk of errors and duplicates in the enterprise master patient index (EMPI) are gaining increased visibility across the industry. In fact, this topic recently caught the public’s attention with the filing of a lawsuit in excess of $1 billion alleging that deficiencies in the way patient information was managed, identified, and displayed in the EMR system put millions of patients at risk of harm.
Furthermore, a 2017 Black Book Research survey of 1,392 health technology managers, focused on the challenges and successes of their organizations’ patient identification processes, found that roughly 33% of denied claims were caused by inaccurate patient identification. This cost the average hospital $1.5 million in 2017. The survey also found that repeated medical care resulting from a duplicate record costs roughly $1,950 per inpatient stay and more than $800 per emergency department visit.
As the healthcare industry advances in information governance best practices and transitions to value-based care, the strategic importance of data quality and integrity is becoming increasingly apparent. With EMPI integrity unquestionably of utmost importance, the question becomes: what are healthcare providers doing to address this challenge?
Many providers have devised data entry and registration quality controls and deployed processes to prevent new errors from entering the system. The maturity and adoption of these processes require ongoing optimization and oversight, but the foundational framework has been put in place. What about the data already in the system, though? Most commonly, providers use one of two strategies to identify existing errors and duplicates:
1) leverage EMR-embedded report modules
2) deploy third-party EMPI analytics software
The challenge here is that the accuracy of the analytics is only as good as the algorithms or logic that lie behind the software. One of the most critical steps in enabling EMPI accuracy is conducting proper due diligence on those algorithms to ensure they deliver the comprehensive view and level of accuracy required to achieve best-in-class error rates.
The good news is that conducting that diligence is not as complicated as it sounds. In fact, with just three questions you can begin to evaluate the effectiveness and accuracy of your EMPI software:
Is my software or reporting module running a probabilistic or rules-based algorithm?
Algorithms are the foundation of any analytics software. Using the wrong rules, whether an incomplete set of requirements or rules applied outside a probabilistic framework, will severely hinder your ability to accurately identify duplicates and errors in your system.
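To make the distinction concrete, here is a minimal sketch contrasting the two approaches. The field names, m/u probabilities, and sample records are illustrative assumptions, not values from the article or any vendor's product; real EMPI software tunes these from its own data.

```python
from math import log2

# Hypothetical per-field probabilities (illustrative only):
#   m = P(field agrees | records are the same patient)
#   u = P(field agrees | records are different patients)
FIELD_PROBS = {
    "last_name":  (0.95, 0.10),
    "first_name": (0.90, 0.15),
    "dob":        (0.97, 0.01),
    "ssn_last4":  (0.92, 0.001),
}

def rules_based_match(a, b):
    """Deterministic rule: exact agreement on a fixed field set, all or nothing."""
    return a["last_name"] == b["last_name"] and a["dob"] == b["dob"]

def probabilistic_score(a, b):
    """Probabilistic matching: each field contributes evidence for or against
    a match; agreement adds log2(m/u), disagreement adds log2((1-m)/(1-u))."""
    score = 0.0
    for field, (m, u) in FIELD_PROBS.items():
        if a[field] == b[field]:
            score += log2(m / u)
        else:
            score += log2((1 - m) / (1 - u))
    return score
```

The difference shows up on near-matches: a single typo in the last name makes the deterministic rule reject the pair outright, while the probabilistic score lets strong agreement on date of birth and SSN outweigh one disagreeing field.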
How are values weighted?
Values determine which fields identify duplicates and what scale is used to separate duplicates eligible for auto-merge from those that require manual review. How values are weighted is important, to say the least.
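As a sketch of how weighting and thresholds interact, the fragment below scores a pair by summing the weights of agreeing fields and routes the pair into one of three buckets. The weights and thresholds are hypothetical examples for illustration; in practice they are calibrated against the organization's own data.

```python
# Hypothetical field weights and decision thresholds (illustrative only).
WEIGHTS = {"ssn_last4": 9.0, "dob": 6.0, "last_name": 3.0, "first_name": 2.5}
AUTO_MERGE_THRESHOLD = 15.0   # at or above: safe to merge automatically
REVIEW_THRESHOLD = 8.0        # at or above: queue for manual review

def classify_pair(a, b):
    """Sum the weights of agreeing fields, then bucket the pair."""
    score = sum(w for field, w in WEIGHTS.items() if a.get(field) == b.get(field))
    if score >= AUTO_MERGE_THRESHOLD:
        return "auto-merge"
    if score >= REVIEW_THRESHOLD:
        return "manual-review"
    return "non-duplicate"
```

The key design point is that the thresholds, not just the weights, encode risk tolerance: raising the auto-merge threshold shifts borderline pairs into the manual-review queue rather than merging them unattended.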
How are duplicates counted?
How duplicates are counted is crucial to effectively managing and efficiently resourcing your EMPI cleanup work streams. It’s also critical to calculating your error rate.
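One reason counting matters: three matched pairs among records A, B, and C may describe a single cluster of three records, not three separate duplicates. A minimal sketch of cluster-based counting, using a simple union-find to group matched pairs and a cluster-based duplicate rate (extra records per cluster divided by total records); the record IDs and totals are made up for illustration:

```python
from collections import defaultdict

def duplicate_clusters(pairs):
    """Group matched record-ID pairs into clusters via union-find,
    so A-B, B-C, and A-C count as one cluster of three, not three duplicates."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)

    clusters = defaultdict(set)
    for x in parent:
        clusters[find(x)].add(x)
    return [c for c in clusters.values() if len(c) > 1]

def duplicate_rate(pairs, total_records):
    """Error rate as extra records per cluster (size - 1) over total records."""
    extras = sum(len(c) - 1 for c in duplicate_clusters(pairs))
    return extras / total_records
```

With pairs A-B, B-C, A-C, and D-E, a naive pair count reports four duplicates, while cluster counting reports three extra records (two in the A-B-C cluster, one in D-E), which is what a cleanup team actually has to merge.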
Of course, just as important as asking the right questions is being armed with the insight required to evaluate and compare the findings. For more information on EMPI analytic best practices and what to look for in responses to these questions, download the full white paper here.