This post is part of a series related to health insurance and access to medical care, but before I add to all of the noise resulting from the passage of HR 1628 (the American Health Care Act) by the House of Representatives on May 4th, I thought it might be a good idea to document what we know (or should know) about the current state of health insurance in America and how we got here. While I always want to encourage readers to express their opinions regarding information found within this weblog, I know that at least one current subscriber worked in the insurance industry for many years, and I would especially welcome any contributions or corrections from them, submitted either through “Comments” or the “Contact Me” link.
Insurance to pay or offset health care expenses in the United States essentially appeared in the early 20th century, having morphed from the disability (workers’ compensation) insurance developed in the 1800s for railroad and other industrial workers. An early form of health insurance was devised by hospitals during the Depression, when fee collection was down but charity care (read “uncompensated care”) kept rising, so they offered prepaid options—an idea that eventually became the Blue Cross system. The program applied only to hospital services, though, and not to doctors’ fees, as the AMA didn’t like the idea of prepaid service programs and what they might do to profits. By 1939, however, the Blue Shield system was developed to address physician fees, but it was set up to indemnify policy holders, meaning it would pay the patient for a covered procedure and the patient then had to pay the doctor.
Both Blue Cross and Blue Shield tried to forecast future claims based on previous claims, then divided that total forecast by the total number of subscribers to determine everyone’s premium. This was called “community rating”. Other companies in the 1950s, however, began using “experience rating”, meaning they would identify groups of folk and measure their average health care needs. Groups that needed less (young people, for example) posed a smaller liability and consequently could be charged a lower premium while still yielding a profit (a system that also began siphoning the lowest-risk subscribers away from the systems using community rating).
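For readers who like to see the arithmetic, here is a small, purely illustrative Python sketch (all of the numbers are hypothetical) of how the same pool of subscribers gets priced under community rating versus experience rating:

```python
# Hypothetical numbers for illustration only.
forecast_claims = 1_000_000.0   # total claims the insurer expects to pay out
subscribers = 2_000             # everyone in the pool

# Community rating: one premium for all, regardless of individual risk.
community_premium = forecast_claims / subscribers   # 500.0 per subscriber

# Experience rating: split the pool into groups and price each group on its
# own expected claims (a hypothetical split of the same total forecast).
groups = {
    "young": {"members": 1_200, "expected_claims": 240_000.0},
    "older": {"members":   800, "expected_claims": 760_000.0},
}
experience_premiums = {
    name: g["expected_claims"] / g["members"] for name, g in groups.items()
}
# young: 200.0, older: 950.0 -- the low-risk group is quoted well below the
# community rate, which is exactly why experience-rated insurers could siphon
# the healthiest subscribers away from community-rated pools.

print(community_premium)      # 500.0
print(experience_premiums)    # {'young': 200.0, 'older': 950.0}
```

The point of the sketch is only the siphoning effect: any group whose expected claims per member fall below the community rate has a financial reason to leave the community-rated pool.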
Some employers offered assistance with health-related insurance programs in those early years, but employer-paid insurance didn’t really surge until World War II, when employees were scarce and wages were capped by federal law. Employers could offer fringe benefits (including insurance) as a way to entice workers. Making those programs even more attractive was a tax code that did not consider employer-paid premiums to be income. Those premiums are still not considered income and therefore are not taxable, meaning that all employer-paid health insurance is subsidized by the federal government and also by those states which have an income tax.
Employers who offered health insurance chose either Fully-Insured or Self-Insured plans. Fully-Insured meant the company simply paid an insurance company to cover its employees. Under a Self-Insured plan, the company paid any employee claims itself (plus premiums for “stop-loss” insurance to cover the odd catastrophic employee illness), thereby typically paying less than it would for a fully-insured plan while avoiding state premium taxes and state insurance regulations.
But clearly not everyone in America worked, and those who did couldn’t work indefinitely. Also, not every employer provided health insurance, and no employer was required to do so. In partial response, the Medicare and Medicaid programs were created in 1965, both of which involve fee schedules that cap the reimbursement health care providers can receive. Medicare was designed to take care of the elderly, who were both less likely to still be working and more likely to have increased medical needs, while Medicaid was designed to assist America’s poorest individuals.
Medicare Part A covers hospital expenses for folk eligible for Social Security and is almost entirely paid for via a payroll tax on both employers and employees, with the remainder covered by general revenue. Medicare Part B covers doctors’ fees and non-hospital fees or services and is funded by general revenue as well as premiums paid by participants. Medicare Part D is designed to assist with prescription drug costs and is funded primarily by general revenue and user premiums.
Medicaid is a health care insurance program for low-income individuals, though income is not the only criterion. It is a joint venture between the U.S. government and the states that choose to participate (a portion of state expenditures is matched with federal dollars), and it covers certain federally mandated hospital and physician expenses along with additional services and fees mandated by the participating state.
So, all bases were covered and the problem was solved. Except that as of 2010, 16.3% of Americans had no medical insurance (Uninsured Rates for 2010), which translated to 49.9 million people. When members of that very significant group became ill or had an accident, they headed to the emergency rooms of the nation’s hospitals, which, under the Emergency Medical Treatment and Active Labor Act (EMTALA) of 1986, cannot deny emergency care to (or fail to continue care once started for) any patient based on inability to pay.
In order to remain solvent while providing uncompensated care, hospitals implemented cost shifting: charging insured patients more for a service or product than it actually costs in order to offset the revenue lost on uncompensated care. No insured patient reviewing their bill was ever supposed to believe that a Tylenol cost $5, but the hospital had to recoup the cost of the free Tylenol it gave the uninsured patient. So, in effect, the uninsured have insurance for some health issues (if they are willing to wait in an emergency room), piggy-backing onto the coverage the rest of us have and inflating our claims.
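The cost-shifting arithmetic behind that $5 Tylenol is simple enough to sketch (again with hypothetical numbers):

```python
# Hypothetical numbers for illustration only.
unit_cost = 0.10          # what a dose of Tylenol actually costs the hospital
uninsured_doses = 400     # doses given away as uncompensated care
insured_doses = 100       # doses the hospital can actually bill for

# Cost absorbed on patients who cannot pay:
uncompensated = unit_cost * uninsured_doses                 # 40.0

# Each billed dose must recover its own cost plus a share of the giveaways:
billed_price = unit_cost + uncompensated / insured_doses    # 0.50 per dose

print(round(billed_price, 2))   # 0.5 -- five times the actual unit cost
```

The fewer billable patients there are relative to uncompensated ones, the higher the markup each insured patient’s bill must carry, which is how a dime’s worth of Tylenol ends up listed at several dollars.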
Enter the Patient Protection and Affordable Care Act.