
Bigger isn't always better

Do you recall the recent humorous television commercial for phone services that featured children who wanted more and tried to explain why? The core message was that more isn't always better. I believe this principle has many applications in healthcare. Because the large evolves from the small, understanding why may require recalling the past and comparing it with the present. We'll look at some examples below.

Big (bad?) data

For all the talk about population health and big data, there is less discussion about data integrity, a key principle in data usage. Anyone who has worked with the most basic of databases, the master patient index, knows how many errors occur in collecting up-front patient access data. Duplicate medical record numbers and duplicate accounts still abound. How can any of the data associated with these accounts be considered valid enough to base conclusions on? How confident are we, really, in our interpretation of this data?
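As a rough illustration of how these duplicates surface, a simple check like the one below flags records that share identifying fields but carry different medical record numbers. This is a minimal sketch: the file name and column names (mrn, last_name, first_name, dob) are assumptions for illustration, not a real MPI schema.

import pandas as pd

# Minimal sketch: flag potential duplicate MPI entries.
# Column names (mrn, last_name, first_name, dob) are illustrative assumptions.
mpi = pd.read_csv("mpi_extract.csv", dtype=str)

# Normalize the identifying fields before comparing.
for col in ["last_name", "first_name"]:
    mpi[col] = mpi[col].str.strip().str.upper()

# Group on name plus date of birth; more than one distinct MRN in a group
# suggests a possible duplicate record to review manually.
groups = mpi.groupby(["last_name", "first_name", "dob"])["mrn"].nunique()
suspects = groups[groups > 1]
print(f"{len(suspects)} name/DOB combinations map to more than one MRN")

No automated match replaces a manual review, but even this crude pass usually surfaces enough suspects to make the integrity problem concrete.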

For example, comparative MedPAR data will not display ICD-10-CM/PCS data until at least 18 months after ICD-10 implementation. There is no way to measure whether we are undercoding, overcoding, coding erroneously, or grouping cases incorrectly until we have enough data to make some judgments. Even then, the only true audit is one that compares the collected data with the source documents (in this case, the medical record). Organizations must conduct multiple rounds of these audits before findings can even be discussed.

The best approach is to begin your own audit of small segments of diagnoses and procedures (e.g., the most common and the most at risk) rather than waiting for the MedPAR data to arrive. Be aware that any comparison database you consult is likely relying on a crude map between ICD-9 and ICD-10 data. As we all know, direct comparisons are not possible in every case, and the more cross-mapping we do, the less granular and accurate the comparison becomes, which in turn undermines the validity of the entire data set.
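As a starting point, something as simple as the sketch below can build an internal audit worklist from your own coded data, targeting the highest-volume principal diagnoses first. The export file and column names (encounter_id, principal_dx) are hypothetical; any coded-encounter extract with a principal diagnosis field would serve the same purpose.

import pandas as pd

# Minimal sketch: pick audit targets from your own coded data rather than
# waiting for comparative MedPAR figures. File and column names are assumptions.
encounters = pd.read_csv("coded_encounters.csv", dtype=str)

# Identify the ten highest-volume principal diagnoses.
top_dx = encounters["principal_dx"].value_counts().head(10).index

# Draw a small random sample of each for comparison against the medical record.
sample = (
    encounters[encounters["principal_dx"].isin(top_dx)]
    .groupby("principal_dx", group_keys=False)
    .apply(lambda g: g.sample(min(len(g), 30), random_state=1))
)
sample.to_csv("audit_worklist.csv", index=False)

The same approach works for high-risk segments: swap the volume ranking for a list of DRGs or procedures your compliance team already worries about.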

In HIM, other data quality issues have an unknown impact on integrity comparisons. For example, are we comparing apples to apples when some sites use computer-assisted coding applications and others do not? Is it fair to compare outsourced coding with in-house coding? In a recent study conducted for a client, I observed that coding time for outsourced cases was dropping in direct ratio to the case mix. Are we gaining productivity but sacrificing quality and reimbursement potential?
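For readers who want to run that kind of check against their own data, the sketch below compares average coding time with case mix by month and by coding source. It is only an illustration of the ratio described above; the file and column names (month, source, coding_minutes, drg_weight) are assumptions, not a standard export.

import pandas as pd

# Minimal sketch: compare coding time against case mix by month and by source
# (e.g., outsourced vs. in-house). All column names are assumptions.
worklog = pd.read_csv("coding_worklog.csv")

summary = worklog.groupby(["month", "source"]).agg(
    avg_minutes=("coding_minutes", "mean"),
    case_mix_index=("drg_weight", "mean"),
)

# Minutes spent per unit of case mix: if this ratio keeps falling for outsourced
# work, productivity may be rising at the expense of coding depth.
summary["minutes_per_cmi"] = summary["avg_minutes"] / summary["case_mix_index"]
print(summary)

Even a rough view like this can signal when a targeted quality review of outsourced cases is worth the effort, before the reimbursement impact shows up in the comparative data.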
