The Queen Mary Gangplank: Solving The Risk Adjustment Data Problem

Author: Dean Stephens
Date: May 9, 2016

Like many of you, when I'm not eating, drinking and breathing healthcare, I have been riveted by the U.S. presidential primaries. I got to thinking – wouldn't it be great to apply the same risk adjustment mechanism we use in healthcare to American democracy? We'd score the relative impact of the election's outcome on each citizen and then risk-adjust their vote. Those projected to be more affected by the election would get more votes, and those projected to be less affected would get fewer. It would be like the superdelegates in the Democratic primary, but instead of party royalty getting more of a say in who gets elected, that additional weighting would go to the citizens most in need of a voice.

While I don't think we'll see risk adjustment in American democracy anytime soon, it is critical for U.S. healthcare. And although I don't need to tell this audience that risk adjustment is important, it is helpful from time to time to step back and ask why.

A customer of ours – a primary care physician in Houston – tells a story about a doctor he knows. For Open Enrollment in the days of Medicare Plus Choice (before CMS implemented risk adjustment), this doctor used to rent out the Queen Mary, a ship docked in Long Beach, California. Why would he go to the expense of renting the Queen Mary? “The gangplank,” the doctor would reply. “You know, the moveable ramp used to get from shore to ship. It’s difficult to get across that thing, and they don’t allow wheelchairs. I want to treat those patients who can get across.”

It's ingenious. It's discriminatory, but it's ingenious. It's a crude form of risk adjustment – by selection rather than by payment. And it's exactly the kind of risk selection CMS was addressing when it implemented risk-adjusted reimbursement: pay more to plans and providers to care for those patients who cannot get across the gangplank.

So, we all focus on optimizing risk-adjusted reimbursement. However, we need to keep in mind that while critically important, risk-adjusted reimbursement is not the goal but rather a means to an end. The end is risk-adjusted care. The goal is to stratify – to de-average – the patient population. The goal is to accurately determine the disease burden of every single patient and ensure we have the appropriate level of resources to provide the care needed to drive the best outcomes for each patient dealing with illness. And the goal is also the inverse: to provide wellness and preventative care to the healthy so they de-utilize health resources.

So, how are we doing? Judging from some recent ACO statistics, we have a long way to go. While ACOs are getting pretty good at reporting their quality measures, only one-quarter of ACOs in the Medicare Shared Savings Program are actually earning shared savings. To improve the situation, we need to risk-stratify our patients. To do that, we need a complete picture of each patient's health. We need a modern Queen Mary gangplank. And that gangplank is data.

The good news is we now have access to lots of digital patient data. The bad news is that using this data for risk adjustment is no longer a human-scale problem. By that I mean two things:

  • First, normalizing disparate data – reading through dense charts for evidence and mapping large data sets – is not what the human brain is best suited for;
  • Second, throwing more people at the problem does not scale cost-effectively.

Luckily, there are now technologies available to address these data analytics issues – not to replace people, but to enhance them. These tools let doctors, nurses and coders practice at the top of their licenses and certifications to improve effectiveness and efficiency.

Here’s a quick look at the kinds of data we can now access and the technologies available to use them for more accurate risk adjustment:

  • Historically, risk adjustment was done with claims data. Claims data is valuable, stable and longitudinal, but it's just a subset of the patient profile. It's like looking at a Pointillist painting: look at a work by Seurat and you can see the outline of the patient, but when you get close you see all the gaps in knowledge. Also, claims data's time lag makes it difficult to use at the point of care.
  • Structured clinical data is getting easier to access with an increasing focus on interoperability. But interoperability is analogous to laying undersea fiber to connect the U.S., France and Germany. When the Americans, French and Germans get on the conference call, they still can’t understand each other because they speak different languages. What’s required is a taxonomy – a Rosetta Stone – to normalize the data.
  • Much of the valuable patient data resides in unstructured formats – free-text physician notes, transcriptions, problem lists, etc. Some estimate unstructured data represents 60-80 percent of the patient profile. To get at this treasure trove of information, you need very robust clinical natural language processing (NLP) or natural language understanding technologies.
  • At the highest level, especially for healthcare providers, the real value is to use all of this data together to perform suspecting and prospective risk scoring. To do this effectively requires an inference engine, using sophisticated health graphs and taxonomies, to identify and weigh evidence for potential risk opportunities (a simplified sketch of this pipeline follows the list).
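
To make that last point a bit more concrete, here is a toy sketch, in Python, of the pipeline this list describes: evidence from claims codes and from NLP-extracted note mentions is normalized to a shared condition taxonomy, conditions found only in the notes are flagged as "suspects," and a simple weighted risk score is computed. Everything here – the taxonomy entries, the keyword-spotting stand-in for clinical NLP, and the risk weights – is invented for illustration; it is not a real HCC model or any vendor's actual implementation.

```python
# Illustrative only: hypothetical codes, taxonomy and weights.

# Hypothetical taxonomy: maps source vocabularies (ICD-10 claim codes,
# free-text phrases) onto a shared set of condition categories.
TAXONOMY = {
    "E11.9": "diabetes_without_complications",   # ICD-10 claim code
    "I50.9": "heart_failure",                    # ICD-10 claim code
    "type 2 diabetes": "diabetes_without_complications",
    "congestive heart failure": "heart_failure",
    "chf": "heart_failure",
}

# Hypothetical per-condition weights, standing in for a real risk model's
# coefficients (e.g. an HCC-style model).
RISK_WEIGHTS = {
    "diabetes_without_complications": 0.10,
    "heart_failure": 0.33,
}


def extract_mentions(note_text: str) -> list[str]:
    """Toy stand-in for clinical NLP: naive keyword spotting over free text."""
    text = note_text.lower()
    return [phrase for phrase in TAXONOMY if phrase.lower() in text]


def risk_profile(claim_codes: list[str], note_text: str) -> dict:
    """Normalize all evidence to the taxonomy and compute a simple risk score."""
    evidence = list(claim_codes) + extract_mentions(note_text)
    conditions = {TAXONOMY[e] for e in evidence if e in TAXONOMY}

    # Conditions supported only by the unstructured note are "suspects":
    # candidates for clinician review and documentation, not yet in claims.
    documented = {TAXONOMY[c] for c in claim_codes if c in TAXONOMY}
    suspects = conditions - documented

    score = sum(RISK_WEIGHTS.get(c, 0.0) for c in conditions)
    return {"conditions": conditions, "suspects": suspects, "risk_score": round(score, 2)}


if __name__ == "__main__":
    print(risk_profile(
        claim_codes=["E11.9"],
        note_text="Patient with type 2 diabetes; exam suggests early CHF.",
    ))
    # Flags heart_failure as a suspect and yields a risk_score of 0.43.
```

In this example, the claim code establishes diabetes, the note surfaces a heart failure suspect that never appears in claims, and the combined score reflects both – exactly the kind of de-averaging a claims-only view would miss.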

The key takeaway here is that there is no silver bullet. NLP and cognitive computing approaches are pieces of the puzzle. But by themselves, they are akin to taking a smart person off the street and having them code medical charts, versus a smart person off the street who also happens to have attended medical school. Creating the modern data version of the Queen Mary gangplank requires an integrated approach that marries computing power with robust medical knowledge.

Dean Stephens is the CEO of Talix.