Is Data Quality The Biggest Threat To Obamacare?

by Jeremiah Johnson

The enrollment system must interact with integrated data from a wide range of federal and state-hosted data sources.

With the political problems surrounding the launch of the Affordable Care Act ("Obamacare") now seemingly behind it, the biggest threat to the law is a technical one.  Initial problems, in which massive traffic surges made the website unavailable, were unfortunate but hardly unexpected; many other high-demand services, such as Twitter, experienced similar issues in their infancy.  Far more worrying are the continuing data problems which, if left unchecked, could leave people without insurance.

Initial reports from insurers have put the share of applications that are incomplete or corrupted anywhere from 50% to 99%.  Insurers have so far seen errors such as people enrolled multiple times, spouses and children mixed up, and accounts listing multiple spouses.  Other issues revolve around the quality of data pulled from external sources: some data is missing, while other data sets are causing errors.  Each time there is a data problem, insurers must manually reach out to applicants to verify their information and confirm they qualify for the insurance they want.  This creates both a cost and a time headache for insurers, because new staff must be hired to do this work and it has to be done quickly.
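To make the problem concrete, here is a minimal sketch of the kinds of consistency checks an insurer might run on incoming enrollment records before accepting them.  The field names and rules below are illustrative assumptions, not the actual enrollment file format used by the exchanges.

```python
# Hypothetical sketch: flagging the error patterns described above
# (duplicate enrollments, multiple spouses, missing data).
from collections import Counter

def validate_enrollments(records):
    """Return a list of (record_id, problem) pairs for suspect records."""
    problems = []
    # Count subscriber IDs to detect people enrolled multiple times.
    counts = Counter(r["subscriber_id"] for r in records)
    for r in records:
        if counts[r["subscriber_id"]] > 1:
            problems.append((r["record_id"], "duplicate enrollment"))
        # Accounts listing more than one spouse.
        spouses = [d for d in r.get("dependents", []) if d["relation"] == "spouse"]
        if len(spouses) > 1:
            problems.append((r["record_id"], "multiple spouses listed"))
        # Required fields missing entirely.
        if not r.get("plan_id"):
            problems.append((r["record_id"], "missing plan selection"))
    return problems

records = [
    {"record_id": 1, "subscriber_id": "A1", "plan_id": "P1",
     "dependents": [{"relation": "spouse"}, {"relation": "spouse"}]},
    {"record_id": 2, "subscriber_id": "A1", "plan_id": "P2", "dependents": []},
    {"record_id": 3, "subscriber_id": "B7", "plan_id": None, "dependents": []},
]
print(validate_enrollments(records))
```

Automated checks like these can triage records, but, as the article notes, each flagged record still requires a human to contact the applicant and resolve it.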

One of the main problems is that the "exchanges" are not just insurance marketplaces; they are also used to administer the subsidies that many low-income people receive towards their insurance.  To determine whether someone is eligible, and the value of their benefits, the system must interact with integrated data from a wide range of sources, including both federal and state agencies.  Checks must be run against IRS income data, and the applicant may also be checked against program data from sources such as the Peace Corps, the Department of Defense, the Veterans Health Administration, and state-hosted Medicaid programs.  Each of these systems is designed differently and handles data in a different way, which compounds the challenge of collecting the correct data and integrating it into one enterprise-level data warehouse for the system to rely upon.
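The integration challenge described above can be sketched as a set of per-source adapters that map each system's own layout onto a common schema before loading into the warehouse.  The source names come from the article; the field layouts and the adapter pattern itself are invented for illustration and are not how the federal data hub actually works.

```python
# Illustrative sketch: normalizing differently-designed source feeds
# into one common schema for eligibility determination.

def from_irs(raw):
    # Assume the IRS feed reports adjusted gross income annually.
    return {"ssn": raw["taxpayer_ssn"], "annual_income": raw["agi"]}

def from_state_medicaid(raw):
    # Assume a state Medicaid feed reports income monthly instead.
    return {"ssn": raw["SSN"], "annual_income": raw["monthly_income"] * 12}

ADAPTERS = {"irs": from_irs, "medicaid": from_state_medicaid}

def normalize(source, raw):
    """Map a source-specific record into the warehouse's common schema."""
    return ADAPTERS[source](raw)

print(normalize("irs", {"taxpayer_ssn": "123-45-6789", "agi": 30000}))
print(normalize("medicaid", {"SSN": "123-45-6789", "monthly_income": 2500}))
```

Even in this toy version, the difficulty is visible: every source needs its own adapter, and a subtle mismatch (monthly versus annual income, say) silently corrupts eligibility decisions downstream.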


But all of this information was available in the buildup to creating the Affordable Care Act – so why wasn't the system architected correctly?  It comes as no surprise that data would have to be collected from a variety of disparate sources and integrated seamlessly for the system to work.  One can only assume that this became obvious early in the requirements-gathering and design stages, but that the drama and pressure of the political arena grew too large for the complexities of what has become a very large data project.

So what went wrong? The Affordable Care Act, like many other projects before it (both political and commercial), has suffered from scope creep and has outgrown both its budget and its aggressive timeline. While a report in June indicated that $394 million had already been spent on the project, this is a relatively paltry amount for a federal project of this size – larger amounts have been wasted with far less to show for it in the past.  As January begins, and with it the first insurance plans issued under the Affordable Care Act, we will soon see just how large a problem big data integration has become for our government.

