Next Target for Hackers in 2018: Medical Data and Wireless Medical Devices
Hospitals Have Weak Security For Mobile Devices
Many health-care organizations fail to protect patient information residing on personal and work-issued mobile devices used in patient care, according to a Forrester Research Inc. survey published Thursday. Hospitals are risking costly data breaches that could result in regulatory compliance penalties and civil suits issued by consumers.
Clinicians are using laptops, smartphones and tablets to exchange patients’ electronic medical records with colleagues, ideally to hasten care and make it more efficient. However, only 59% of health-care IT professionals said they encrypt such devices – a surprising figure given the stakes, said Chris Sherman, a Forrester analyst who wrote the report. “We expected the number to be higher,” he said. “This shows that health care has a way to go before they can say that they have data protection.”
That means hospitals, which traditionally hold the weakest cybersecurity posture, are putting patients’ sensitive information at risk in the event that these devices are lost or stolen, a fairly common occurrence, according to the Forrester report. Since 2005, 39% of health-care security incidents have involved a lost or stolen device. Those incidents account for 78% of all reported breached records originating from health care, said Mr. Sherman. “Endpoint data security must be a top priority in order to close this faucet of sensitive data,” he said.
But it isn’t and there are a few reasons why, according to Mr. Sherman.
Many health-care CIOs underestimate the potentially lucrative value of patient records on the black market, believing that hackers are more interested in credit or bank card information. One CIO for a large health care organization told Mr. Sherman: “We are not worried about hackers stealing personal health information; they are more focused on credit card and intellectual property, neither of which we carry much of.” But Mr. Sherman said that health care records can fetch anywhere from $20 for a single health record to more than $500 for a more complete patient dossier.
Murky regulations further muddy the waters. Neither the Health Insurance Portability and Accountability Act (HIPAA) nor the Health Information Technology for Economic and Clinical Health (HITECH) Act, two laws which govern data privacy and security in health care, explicitly require data encryption on devices containing sensitive data. Modest spending on IT security may also be a factor. Only 18% of hospitals’ IT budget goes to security compared with 21% across other industries.
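Where devices are encrypted, the usual pattern is full-disk or file-level encryption with a key derived from a passphrase. A minimal sketch of the key-derivation step using Python’s standard library (the passphrase, salt handling, and iteration count here are illustrative assumptions, not any hospital’s actual configuration):

```python
import hashlib
import os

def derive_device_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 256-bit key from a passphrase via PBKDF2-HMAC-SHA256.

    The key would feed a disk or file cipher such as AES-256; the cipher
    itself is outside this sketch.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)  # stored next to the ciphertext; it need not be secret
key = derive_device_key("correct horse battery staple", salt)
assert len(key) == 32
# The same passphrase and salt always yield the same key, so the device
# can re-derive it at unlock time.
assert key == derive_device_key("correct horse battery staple", salt)
```

The high iteration count is the point: it makes brute-forcing a stolen laptop’s passphrase expensive even when the attacker has the hardware in hand.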
High-profile breaches are causing CIOs to ratchet up their resource allotment on security. Last month, Community Health Systems Inc. said that hackers took data related to some 4.5 million patients. New York-Presbyterian Hospital and Columbia University in May paid the government $4.8 million to resolve charges resulting from the exposure of 6,800 patient records in 2010.
To stem data leakage, Mr. Sherman recommended that hospitals adopt better encryption practices and virtualize desktops and applications to prevent local storage of data. Hospitals must also monitor where data resides at all times and limit data access to those individuals whose job function requires it. They should also teach employees safe computing practices and spell out consequences for failing to follow them.
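The “limit access to job function” recommendation is, in practice, role-based access control. A toy sketch of such a policy check, with hypothetical roles and permissions (a real deployment would use the EHR platform’s built-in access controls and audit logging):

```python
# Hypothetical role-to-permission mapping; illustrative only.
ROLE_PERMISSIONS = {
    "physician": {"read_chart", "write_chart", "prescribe"},
    "nurse":     {"read_chart", "write_chart"},
    "billing":   {"read_billing"},
}

def is_allowed(role: str, action: str) -> bool:
    """Permit an action only if the role's job function requires it."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("physician", "prescribe")
assert not is_allowed("billing", "read_chart")  # billing staff never see charts
assert not is_allowed("visitor", "read_chart")  # unknown roles get nothing
```

The design choice that matters is the default: an unknown role or action is denied, so a configuration gap fails closed rather than open.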
According to a long report based in part on research by the Information Security Institute at Johns Hopkins University, despite numerous technology standards written into federal regulations, the many ways that health care professionals access health information about their patients are riddled with holes.
In one case, residents at the University of Chicago Medical Center used a shared folder on Dropbox that allowed them to access patient records on their iPads. In another, OpenEMR, an open-source medical records system that had been adopted agency-wide by the Peace Corps, was found to have numerous flaws that opened it to attacks by hackers. Many of the weaknesses found were described as being pretty basic – or as one source quoted in the story put it, “security 101.”
Part of the problem is that the last government guidelines on this issue were published in 2005, and thus aren’t up to speed with what are now considered everyday practices.
More troubling than the vulnerabilities – which expose only the potential for an attack – are the anecdotal bits of evidence that attacks are actually taking place. At the Department of Veterans Affairs, there were nearly 200 instances of medical devices infected with malware between 2009 and 2011.
In another case, a server in Utah storing Medicaid data on nearly 800,000 people was attacked earlier this year. The attack was traced to a server in Eastern Europe, though as is always the case with these things, it’s impossible to know exactly where the person carrying out the attack was situated.
The Convenience of Accessing Medical Records Online
As the health-care industry rushed onto the Internet in search of efficiencies and improved care in recent years, it has exposed a wide array of vulnerable hospital computers and medical devices to hacking, according to documents and interviews.
Security researchers warn that intruders could exploit known gaps to steal patients’ records for use in identity theft schemes and even launch disruptive attacks that could shut down critical hospital systems.
A year-long examination of cybersecurity has found that health care is among the most vulnerable industries in the country, in part because it lags behind in addressing known problems.
“I have never seen an industry with more gaping security holes,” said Avi Rubin, a computer scientist and technical director of the Information Security Institute at Johns Hopkins University. “If our financial industry regarded security the way the health-care sector does, I would stuff my cash in a mattress under my bed.”
Compared with financial, corporate and military networks, relatively few hacks have been directed at hospitals and other medical facilities. But in recent months, officials with the Department of Homeland Security have expressed growing fear that health care presents an inviting target to activist hackers, cyberwarriors, criminals and terrorists.
“These vulnerabilities may result in possible risks to patient safety and theft or loss of medical information,” a DHS intelligence bulletin said in May.
Security researchers are starting to turn up the same kinds of trivial-seeming flaws that earlier opened the way for hackers to penetrate financial services networks, Pentagon systems and computers at firms such as Google.
Rubin has documented the routine failure to fix known software flaws in aging technology and a culture in which physicians, nurses and other health-care workers sidestep basic security measures, such as passwords, in favor of convenience.
Another researcher found that a system used to operate an electronic medicine cabinet for hospital prescriptions in Oklahoma could be easily taken over by unauthorized users because of weaknesses in the software interface.
OpenEMR, an open-source electronic medical records management system that is about to be adopted worldwide by the Peace Corps, has scores of security flaws that make it easy prey for hackers.
The University of Chicago Medical Center operated an unsecure Dropbox site for new residents managing patient care through their iPads, using a single user name and password published in a manual online.
After a Post reporter called about the vulnerabilities, officials at the cabinet manufacturer and the medical center took steps to close the gaps. The Peace Corps said it was considering changes.
Government oversight and industry practices have not kept pace with the changing technology. The Food and Drug Administration, which is responsible for overseeing medical devices, most recently published guidance on cybersecurity in 2005.
The agency has urged hospitals to allow vendors to guide them on security of sophisticated devices. But the vendors sometimes tell hospitals that they cannot update FDA-approved systems, leaving those systems open to potential attacks. In fact, the agency encourages such updates.
“A lot of people are very confused about FDA’s position on this,” said John Murray Jr., a software compliance expert at the agency.
A Government Accountability Office report in August noted that defibrillators and insulin pumps are vulnerable to hacks. In July, one researcher-hacker was able for the first time to use a specialized search engine called Shodan to discover a medical device, a wireless patient glucose monitor in Wisconsin, linked to the Internet and open to hacking.
The Department of Health and Human Services is overseeing the move to electronic health records systems, some of which have documented security vulnerabilities.
John Halamka, a physician and Harvard University professor who is co-chairman of the HHS health information technology standards committee, said security in the health-care industry is “not as good” as in other industries. But he added that the industry is aware of the problems and is scrambling to make improvements.
“It’s completely headed in the right direction,” he said.
But Laurie Williams, a computer scientist at North Carolina State University, said health care remains widely vulnerable.
“There are basic, basic, Security 101 vulnerabilities we identified,” said Williams, who was among a team of researchers that identified numerous security flaws in several electronic health records systems two years ago. “I’m concerned that at some point the hackers are really going to begin exploiting them. And that’s going to be a scary day.”
Questions about the cybersecurity of medical systems have been simmering for more than a decade. But the issue has intensified as hospitals embrace wireless devices and electronic records. Some health-care officials assumed that their networks were too obscure, or offered too few financial enticements, to be of interest to hackers.
Information technology executive Peter Tippett, the chief medical officer for Verizon, said the threat from cyberspace should not be overstated. Simple theft of laptops and other devices makes up the bulk of incidents.
“The fact is, there aren’t many attacks,” said Tippett, who oversees ICSA Labs, an independent division of Verizon that tests electronic health records systems and other security products for government certification. “The bad guys so far at least have been looking for money.”
Still, Tippett acknowledged that health care ranks near “the bottom of the list” of industries in terms of cybersecurity. “It’s about like retail,” he said.
In July, a consortium of hospitals, health plans, pharmacies, drug companies and government agencies called the Health Information Trust Alliance launched a cybersecurity incident response and coordination center to defend against “cyber crime, cyber espionage and cyber activism.”
No one knows exactly how many intrusions have occurred, but anecdotes are mounting. Medical devices at Veterans Affairs facilities were infected by malware at least 181 times from 2009 to 2011, according to the DHS intelligence report that surfaced in May.
On March 30, a hacker broke into a network server at the Utah Health Department, gained access to Medicaid data about 780,000 people and stole an undetermined number of records. Authorities traced attackers to computers in Eastern Europe. Utah officials acknowledged the breach and said they are taking extensive measures to protect patients against identity theft.
HHS officials said health-care providers must combine cultural, practical and technological solutions to defend against theft and hacking. The officials also said that they have ramped up enforcement efforts against organizations that failed to protect patient information.
“While there is always more work to do, we have reached record settlements against companies who violated privacy laws and sent a message to everyone that privacy violations will not be tolerated,” said Leon Rodriguez, director of the HHS Office for Civil Rights.
Three years ago, Rubin, the Johns Hopkins researcher, began assessing systems at major hospitals and clinics, making visits to operating rooms and intensive-care units.
He found that doctors and medical workers used the same computers to connect to both the Internet and internal networks. Rubin said doctors become “a pipeline for attackers into the sensitive networks.”
One nurse told Rubin that she had the job of typing in a physician’s password constantly so that the doctor would not have to, leaving the unattended machine unprotected. “She literally walked around the room logging the doctor into every machine, every hour,” Rubin said. “Unbelievable.”
He declined to name the institutions he studied because to do so would violate his research agreements.
“The doctors and technicians I spoke with seemed mostly well aware that their systems are vulnerable,” said Rubin, who has previously found security problems in voting machines. He said that health care “is an industry with the least regard, understanding and respect for IT security of any I’ve seen, and they have some of the most personal and sensitive information of anyone.”
Another researcher, Tim Elrod, a consultant at FishNet Security, found vulnerabilities in a system that enables care providers using a Web browser to automatically dispense drugs from a secure cabinet produced by Omnicell.
Working with Stefan Morris, Elrod discovered that unauthorized users could sidestep the login and password page and gain control of a cabinet at a hospital run by Integris Health, the largest health organization in Oklahoma. They used a well-known hacking technique called a “forced browsing” attack.
“At that point, we had full administrative control,” Elrod said. “We could do anything.”
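Forced browsing is conceptually simple: the attacker requests a privileged URL directly instead of following the login flow, and a server that enforces authentication only on the login page lets the request through. A toy simulation of the flaw and its fix (the paths and handlers are hypothetical, not Omnicell’s actual interface):

```python
# Toy model of a web app: paths map to handlers. The admin handler has
# no auth check of its own, which is the forced-browsing bug.
PAGES = {
    "/login": lambda session: "login form",
    "/admin/dispense": lambda session: "dispense controls",
}

def vulnerable_server(path, session=None):
    # Auth is only "enforced" by the login page existing; nothing stops
    # a direct request to /admin/dispense.
    return PAGES[path](session)

def patched_server(path, session=None):
    # Fix: check the session on every privileged path, not just /login.
    if path.startswith("/admin") and not (session and session.get("authenticated")):
        return "403 Forbidden"
    return PAGES[path](session)

# Attacker skips /login and requests the admin page directly.
assert vulnerable_server("/admin/dispense") == "dispense controls"
assert patched_server("/admin/dispense") == "403 Forbidden"
assert patched_server("/admin/dispense", {"authenticated": True}) == "dispense controls"
```

The lesson generalizes: authorization must be checked per request on the server side, because the client is free to type any URL it likes.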
After being contacted, Peter Fisher, vice president of engineering at Omnicell, said he “is launching an immediate investigation into this reported vulnerability.” The same day, the company issued a software fix to customers around the globe.
“Omnicell is committed to delivering the highest level of data security to our customers as demonstrated by our regular release of software updates, which include security enhancements,” Fisher said.
John Delano, chief information officer for Integris, confirmed the Omnicell flaw and said his company last year disconnected it from any networks that might link to the Internet.
“Unfortunately, a lot of times you run into vendors who have poorly coded software,” Delano said. “That’s the case here.”
After an inquiry, a researcher at the University of Florida, Shawn Merdinger, found flaws in the use of wireless iPads by new medical residents at the University of Chicago Medical Center.
Merdinger found a manual for the iPad initiative posted online that published a single user name and password for all the residents to use with a shared Dropbox account. The idea was to promote collaboration.
But the arrangement opened the medical center to “social engineering” attacks, in which hackers plant documents, such as PDFs, loaded with malicious code. Once such documents were uploaded, the iPads could become infected, potentially handing control of hospital networks to hackers.
After the medical center was alerted, officials closed the gap.
“This Dropbox account was intended to be used only to share educational material among residents,” Cindy Kitching-Pena, director of the Department of Medicine Education Programs, said in a statement. “Nevertheless, the username and password to the account have been changed, and the account will be terminated.”
In February 2009, Congress mandated the widespread adoption of electronic health records (EHR) computer systems as part of the stimulus legislation known as the American Recovery and Reinvestment Act. The law included as much as $36 billion in stimulus funding to promote the “meaningful use” of such systems. It was the Obama administration’s first big step toward health-care reform.
Since then, tens of thousands of doctors, hospitals and other health-care operations have received more than $8.1 billion in government payments, and they have begun using the systems to digitize and share millions of patients’ records in ways that proponents say will save billions of dollars and improve care.
The law required electronic health records systems to be certified by independent labs to meet an array of standards established by HHS, but those standards include few security provisions, according to documents and interviews.
Officials have known for years about vulnerabilities in the systems. In 2007, the eHealth Vulnerability Reporting Program, a group that included senior health-care officials, concluded that “commercial EHR systems are vulnerable to exploitation given existing industry practices” and that the “skill level required to exploit is low.”
Two years ago, Williams, the North Carolina State researcher, and her colleagues found common flaws in four systems that would expose users’ login information and enable outsiders to access patients’ records.
The group’s report urged rigorous security testing before electronic health record vendors could be certified for stimulus funding.
Federal officials have not gone that far, but Farzad Mostashari, the national coordinator for health information technology at HHS, said they “have taken important steps with vendors to make electronic health records more secure,” such as requiring encryption of data on laptops.
Among the systems that HHS has certified is OpenEMR, open-source software developed by a nonprofit charitable group called OEMR. The software can be downloaded for free.
Williams’s group – along with several white-hat hackers – has found hundreds of vulnerabilities in the system.
OEMR’s leaders acknowledged the flaws but said it would take an experienced hacker to exploit them. Chief technology officer Kevin Yeh said his group fixes problems as soon as it learns about them and that other Web-based systems probably have the same weaknesses.
He added that federal certification standards “are not sufficient.”
A heart defibrillator remotely controlled by a villainous hacker to trigger a fatal heart attack? It may only happen in the movies, but the Government Accountability Office (GAO) doesn’t want to take any chances.
In a recent report from the GAO, the non-partisan agency, which investigates issues for Congress, says the threat that hackers could manipulate heart defibrillators and other remotely controlled medical devices to fatal ends is real enough for the U.S. Food and Drug Administration (FDA) to take action.
Often referred to as the “congressional watchdog,” the GAO says implantable devices such as defibrillators that jolt failing hearts back into rhythm, pacemakers that resync irregular heartbeats, or insulin pumps that maintain proper insulin levels for diabetics are at risk of hacking. In the report, the GAO reviewed research published by information security specialists and studies in peer-reviewed journals and determined that these devices are indeed vulnerable to sabotage.
Not only can the normal function of the devices be tampered with, but important, and private, health information collected by the devices is routinely uploaded to patients’ health records. While no cases of malicious hacking have yet been reported among users, security specialists have recently shown, in well-publicized demonstrations, that it is possible, and alarmingly easy, to hack insulin pumps. That prompted Congress to look into the security issues.
The GAO is asking the FDA, which approves medical devices and regulates their safety and effectiveness, to develop a plan to address the security risks. The report says the FDA considered risks from unintentional threats, such as ordinary magnets and airport security scanners, but not intentional ones during pre-market approval for devices.
“Even the human body is vulnerable to attack from computer hackers,” said California representative Anna Eshoo, one of three Congress members to request the report, in a statement.
“Implantable medical devices have resulted in tremendous medical benefits for the patients who use them, but the demonstrated security risks require a renewed emphasis by the FDA and manufacturers to identify, evaluate and plug the potentially rare but serious security holes that exist in these devices.”
As NBC News reports, doctors use wireless communication systems to download diagnostic and function-status information from their patients’ medical devices and make changes to the devices remotely. The GAO says that while the FDA has an adverse event reporting system in place, it doesn’t necessarily address information security problems. “Because information security in active implantable medical devices is a relatively new issue, those reporting might not understand the relevance of information security risks,” the authors write.
The report also acknowledges that the added security measures the GAO is calling for won’t be easy to implement.
In its report, the GAO offers several actions the FDA could adopt to create a more comprehensive security review process, including putting a greater burden on manufacturers to identify and address potential security risks during the pre-market approval process as well as establishing a separate entity responsible for assessing the safety of wireless devices from potential hacking.
“FDA shares the concern over the security and privacy of medical devices, and emphasizes security as a key element in device design,” the agency wrote in response to the report in a statement. “Any system with wireless communication can be subject to interception of data and compromised privacy as well as interference with performance that can compromise the safety and effectiveness of the device.”
Medical device manufacturers are becoming increasingly aware of security risks, but computer-security researcher Jay Radcliffe who found security risks in his own insulin pump, told Bloomberg Businessweek he understands the time and money obstacles. “I can very much sympathize with the manufacturers’ concerns,” Radcliffe told Bloomberg. “When you’re dealing with this much vagueness, and you’re dealing with a security vulnerability where the risk is really, really low, you go to the FDA and say you want to change this device and it could be $500,000 and four years of time. In some cases, smaller manufacturers could go out of business.”
Still, the potential for breaches of privacy, not to mention serious health consequences, including death, that could arise from vulnerable wireless medical devices is starting to alarm more government groups. Wired reports that the National Information Security and Privacy Advisory Board has written a letter to the Office of Management and Budget requesting reforms in the oversight of device approvals.
It sounds like a scenario out of a James Bond movie: a villain spots his quarry and uses a small device to hack into the official’s heart defibrillator, sending a signal for mayhem. There’s chest grabbing, and a collapse, and alarms, but the bad guy walks free because there’s no gun, knife, poison dart — no evidence at all a murder has been committed.
According to a recent report by the Government Accountability Office (GAO), a non-partisan agency that works for Congress, not only is such a scenario possible, there’s a growing danger that grandpa’s heart rhythm device, or, say, a child’s insulin pump – any implantable device that can be accessed remotely — could be susceptible to hacking.
But the GAO report suggests that the Food and Drug Administration, which approves and regulates such devices, has been behind the curve when it comes to security and now is calling for the agency to set guidelines for manufacturers to help combat the threat of hacking.
According to the report, which had been requested by members of Congress in light of tests by researchers revealing the vulnerability of the medical technology, “there have been four separate demonstrations in controlled settings showing that the intentional exploitation of vulnerabilities in certain medical devices is possible.” The report stressed that there have been no proven cases of anybody actually doing this for nefarious purposes.
Still, when he released the GAO report, Congressman Edward J. Markey (D-Mass.), one of the requesting legislators, issued a statement saying that “wireless medical devices are susceptible to increasingly advanced hacking techniques that could threaten patient health.”
The susceptibility stems largely from their wireless communications abilities, explained Nathanael Paul, chief scientist at the Center for Trustworthy Embedded Systems at Oak Ridge National Laboratory. In 2010, Paul and a colleague demonstrated they could hack into an insulin pump, like the one Paul himself wears to treat his Type 1 diabetes.
Thanks to wireless communication, doctors can download diagnostic information and health status from the device to a computer and make changes in the performance of a device without surgery. For example, defibrillators can be programmed using a wand that communicates with the device inside a patient’s chest.
But as anybody who has experienced neighborhood confusion over garage door openers operating the wrong doors can attest, that can leave devices vulnerable to attack or simple accident.
“This year, for example, a young lady in the Salt Lake City airport asked TSA if she should walk through a security device,” Paul recalled by way of illustration. “TSA said yes, basing that on the experience of thousands of people with insulin pumps. She walked through and her pump responded in an unanticipated way that could have threatened her life.”
Causing a device to misfire intentionally to cause harm takes technical sophistication, but it’s certainly doable, he said. And there are a lot of potential targets. According to a 2011 report from the World Society of Arrhythmias, in just one year, 2009, 133,262 defibrillators were implanted in patients in the United States, or 434 devices for every million people, and that’s just one device for one condition.
Preventing potential hacking might seem as simple as requiring a password for access. Another strategy could be to limit the distance devices can send information back and forth. The tests demonstrating vulnerability showed the range of some devices could be up to 300 feet. Paul has been exploring that possibility. Software changes are another avenue.
But enhancing security of a vital medical device isn’t as simple as it sounds. The primary purpose of any medical device is to preserve health, not keep out bad guys. Installing security software could put more demand on battery life, for example. And suppose a patient has a defibrillator, his doctor’s office is closed, and he feels chest pains? He could go to an emergency room, but, panicky, could easily forget the password. The ER doctors then could not get access to whatever the device has to tell them.
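One of the range-limiting ideas mentioned above can be sketched as a signal-strength gate: the implant rejects programming commands whose received signal strength implies the programmer is too far away. A toy model (the threshold and the free-space path-loss estimate at 403 MHz, the MICS band, are illustrative assumptions, not a real device’s firmware):

```python
import math

def estimated_rssi_dbm(distance_m: float, tx_power_dbm: float = 0.0) -> float:
    """Crude free-space path-loss estimate at 403 MHz (MICS band)."""
    # FSPL(dB) ~= 20*log10(d_m) + 20*log10(f_MHz) - 27.55; at 403 MHz the
    # frequency term works out to about +24.6 dB.
    return tx_power_dbm - (20 * math.log10(distance_m) + 24.6)

def accept_command(distance_m: float, threshold_dbm: float = -40.0) -> bool:
    """Reject commands whose signal is too weak to have come from nearby."""
    return estimated_rssi_dbm(distance_m) >= threshold_dbm

assert accept_command(0.5)    # programming wand held at the bedside
assert not accept_command(90) # roughly 300 feet away: too weak, reject
```

Real attackers can raise transmit power to fake proximity, which is why researchers treat signal strength as one layer among several rather than a complete defense.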
Whatever solutions are to be found, the onus for making sure manufacturers implement them has now fallen mainly on the FDA.
The GAO recommended the FDA make security risks part of premarket approval just like FDA’s more traditional criteria, safety and effectiveness. It also suggested FDA begin working with other federal agencies whose primary duties focus more on cyber security, make the issue one of the things it monitors during postmarket review, and “establish specific milestones for completing this review and implementing these changes.”
Markey, through his office, told NBC that FDA must place “renewed emphasis” on the security of the devices under its purview. “I look forward to hearing from the FDA on progress to address this risk,” he said.
The agency does seem to be gearing up to take a more aggressive stance. In its response to the GAO report, the FDA noted that it has recently begun collaborating with the Department of Homeland Security, the National Institute of Standards and Technology, the Department of Defense, and law enforcement. The FDA’s Center for Devices and Radiological Health has also developed a National Postmarket Surveillance Plan to better track adverse events related to devices.
“FDA concurs with GAO that the agency continuously develop and implement new strategies designed to assist the agency in its medical device premarket review and postmarket surveillance efforts relative to information security,” agency spokesperson Michelle Bolek told NBC.
The agency is studying the ways other industries are battling cyber security threats for any strategies manufacturers can incorporate into the devices. That’s where researchers like Paul come in.
The FDA and industry have begun consulting with him and others, he said, and he’s optimistic about progress. “I think they are doing a large amount of work. They are responding, and so are manufacturers,” said Paul, who doesn’t personally profit from such consulting.
He also argued that the potential risk to the security of medical devices is far outweighed by the benefits of the devices.
In other words, don’t panic.