XiFin EXCELLENCE

Hijacking the Concept of Value Based Pricing

July 1, 2012

Buyer beware. The promise of value-based pricing is becoming synonymous with bundled pricing and utilization control as Healthcare Reform stakeholders, incentivized by short-term gains, tackle a fee-for-service system that rewards the provision of services rather than the achievement of outcomes. In the race to contain healthcare costs, whether through the incentives afforded ACOs or the current pressure on payors, it is alarming that short-term gains could jeopardize true healthcare reform, which promises better patient care at lower cost. Transformative technology has put us on the precipice of the most significant paradigm change in the delivery of healthcare, one that will revolutionize patient care while finally aligning the economic interests of the patient with those of the provider and the payor. Technology also bridges the communication gap, facilitating the collaboration a fragmented healthcare community needs to eliminate duplicative services while optimizing outcomes. Providers, patients, and payors can collaborate in real time at the point of service to ensure the patient receives quality care that is both appropriate and timely. But will payors slow the long-term gains by focusing on short-term incentives?

Payors have traditionally driven costs down by attempting to control utilization and reducing reimbursement rates. Financial incentives through bundled payments and utilization control have had marginal success, but at the cost of quality. Marrying financial incentives to quality has a better chance of success, but this formula requires longitudinal research and does not lend itself to short-term solutions. Under the rubric of “decision support,” first Rx formularies, and then guidelines for inpatient stays, were developed. Formularies for approved drugs were created without consideration for effectiveness and tolerability, ultimately reducing patient access to drugs. In the early ’80s, Milliman & Robertson, a Seattle-based actuarial firm, began developing decision support guidelines. The spread of managed care hastened the adoption of these guidelines in hospitals. Over the years, and after much controversy over short inpatient-stay guidelines, lawsuits and legislation reshaped Milliman’s development of decision support from a consensus-based methodology to an evidence-based one. In fact, Milliman developed a hierarchy of guidelines, ranging from randomized controlled trials as the gold standard, to observational research published in peer-reviewed journals, down to the lowest tier of unpublished evidence and quality improvement projects. A number of payors still use these guidelines in Technical Assessment programs to assess rapidly developing modalities for which they are no longer as relevant. These rapidly developing modalities require more nimble assessments, as Dr. Joe Selby, Executive Director of the Patient Centered Outcomes Research Institute (PCORI), indicated at a recent meeting of the Personalized Medicine Coalition’s Legislative Affairs Committee when he noted that “anecdotal evidence” was one of the tools the institute would use to aid in its assessments.

The reality is that it is neither easy nor expedient to meet the highest standard of randomized controlled trials, and incentive programs reward short-term gains that may be misaligned with the long-term benefits of personalized medicine. Milliman’s major competitor, InterQual, a division of McKesson, uses a consensus methodology in which a panel of physicians develops guidelines referred to as clinical decision support or clinical appropriateness criteria. InterQual is used by thousands of hospitals for patient care and by hundreds of health plans, including CMS, for retrospective payment review.

Solucient, the third key player in the development of clinical criteria, pairs the InterQual guidelines with its own length-of-stay data. Solucient does not provide guidelines, but rather tracks patterns and publishes actual data drawn from databases covering over 2,000 hospitals. While this data is generally used as a benchmark, in a class action lawsuit one payor was charged with making wrongful use of the data for financial and medical-necessity decisions.

These three key players in the medical decision support space have traditionally focused on hospital length-of-stay criteria and have weathered lawsuits and legislative change that better shaped the quality and proper use of that data, which has been the primary tool of the payor seeking to control cost. This is the historical pattern by which the use of a collection of data, whether analyzed statistically or by medical professionals, was modified when the data itself was not specific to patient circumstance and the analysis was not semantic in nature. The data are not used to determine whether a patient should have a hospital stay: while the reason for a stay and the quality of that stay represent a critical decision in patient care, the incident of the stay by itself does not represent a long-term change to either the patient’s quality of care or health. Thus, controlling the quality and length of the stay through the use of actuarial data to set benchmarks and guidelines may make good economic and medical sense.

With the exception of Medicare, in the payor world roughly 20% of policyholders switch insurers each year. A large regional insurer calculated that its annual turnover may run as high as 30%, far too high to recoup the cost of pre-emptive disease management or other front-loaded investments in good health. Payors are not incentivized to make long-term economic decisions about healthcare, but rather to achieve economic rewards in the short term. Formularies, utilization control, and reimbursement compression achieve these goals.

When decision support tools enter the realm of personalized medicine, though, there is a paradigm shift. Laboratory testing, whether clinical or genetic in nature, represents less than 2% of the healthcare spend but influences over 70% of all patient management decisions. The cost of controlling utilization and reimbursement for laboratory testing may not be immediately evident, but limiting or eliminating necessary testing results in an explosion of healthcare expenditures if a symptom or disease is not promptly identified or if patients are put on therapies that do not work for them. Improper control of laboratory testing simultaneously increases healthcare costs and jeopardizes the health, and sometimes the life, of the patient.

Overutilization of lab testing is estimated at 17%, or 0.03% of the healthcare spend, while underutilization was reported at 38.3% in a study published in the New England Journal of Medicine in 2003. Moreover, the cost of underutilization far outweighs the actual cost of the tests themselves. Consider that the entirety of diagnostic lab spending in the US will be approximately $65B in 2012. A study done by McKinsey & Co. for the Personalized Medicine Coalition revealed that of the $292B spent on medications in 2008, approximately $145B went to drugs and therapies that were ineffective for the patients who took them. Further findings estimate the cost of adverse drug events at $45B to $135B per year; 25% of these costs could be averted through the use of diagnostic tests for the appropriate biomarkers.
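To put these figures in perspective, here is a quick back-of-the-envelope calculation, a sketch only, using the ranges cited above. It shows that the avertable 25% share of adverse drug event costs alone rivals a large fraction of the entire annual diagnostic lab spend:

```python
# Back-of-the-envelope illustration of the figures cited above.
# All amounts are in billions of USD; the ranges and the 25% avertable
# share are taken from the studies referenced in the text.

lab_spend_2012 = 65.0               # total US diagnostic lab spend (2012 est.)
ineffective_drug_spend = 145.0      # of the $292B 2008 medication spend
adverse_event_cost = (45.0, 135.0)  # annual cost range of adverse drug events
avertable_share = 0.25              # share avertable via biomarker testing

avertable_low = avertable_share * adverse_event_cost[0]    # 11.25
avertable_high = avertable_share * adverse_event_cost[1]   # 33.75

print(f"Avertable adverse-event costs: ${avertable_low:.2f}B-${avertable_high:.2f}B")
print(f"As a share of total lab spend: "
      f"{avertable_low / lab_spend_2012:.0%}-{avertable_high / lab_spend_2012:.0%}")
```

In other words, the avertable adverse-event costs at the high end of the range exceed half of the entire diagnostic lab budget, before even counting the $145B spent on ineffective therapies.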

It becomes clear that the usual attempts to control utilization of diagnostic services in the short run, before well-established multi-year clinical use data exist, can result in much greater costs to the healthcare system in the long run, with adverse impact on patient care. The traditional approach to decision support does not work for diagnostic lab services. Moreover, it must be understood that squeezing lab services on one end of healthcare spending, if improperly applied, only results in an explosion of expenditures on the other end. No other sector of healthcare has this level of influence over the remainder of the healthcare spend. Given the potential introduction of an additional layer of bureaucracy over diagnostic services in the form of FDA oversight, which will effectively slow the introduction of new diagnostics, and the onerous cuts to reimbursement and access initiated by the Affordable Care Act, will we reduce healthcare costs and improve care? Or will we increase expenditures while missing the boat on the most promising technological advances, delaying a paradigm change in healthcare with the potential to impact not just cost but our quality of life?

Providers should not allow external forces to dictate the adoption of their advanced specialized testing, especially since, given the rapid introduction of and improvements to diagnostic testing, an external panel of physicians who often lack relevant expertise cannot be expected to keep pace with the efficacy of new tests. To this day, AHRQ, an entity that many payors, including CMS, look to for guidelines, still lists the level of evidence for BRCA testing as fair because there have not been traditional randomized clinical trials. Guideline inclusion is slow to catch up with actual clinical practice and is often influenced by political self-interest among physician specialties. Providers themselves have the best and latest information on both their own and competing assays; they should stand behind their products and be willing to take risk on pricing in order to educate both the payor community and the ordering physician regarding the value of their services. That means assuring that the service is ordered appropriately and charging for it based on its value. Specialty labs have been very successful at educating payors regarding the optimal use of their assays and at negotiating the elimination of costly prior authorization processes, by providing ordering physicians with guidelines that truly reflect optimal use of the test and where best to place it on the diagnostic continuum. This, coupled with negotiating pricing that goes down if the assay proves less effective than expected and up if it demonstrates patient and cost effectiveness in longitudinal studies, will put the lab provider with the best assay at an advantage and in the driver’s seat.

Technology is the equalizer that has not only disrupted the healthcare continuum but also allows us to deliver prompt and meaningful decision support to the physician’s desktop. This continuing advancement, and our ability and desire to integrate with other systems, will allow us to achieve together what could not possibly be done independently by any single entity. Despite our reluctance and the industry’s belated establishment and adoption of standards (5010, ICD-10, LOINC, CPT, HIT, SaaS, PaaS), our very existence as viable businesses now depends on a fully cooperative network. Selecting internal technology platforms that allow each provider to deliver its expert content through web services connectivity to the point of service lets the provider facilitate proper use of diagnostic services while giving physicians decision support criteria (developed in partnership between providers and payors) that optimize quality of care and value for the healthcare dollar. Aligned incentives between payors and providers have the best opportunity to deliver the promise of personalized medicine.
