
Medical Device Insecurity: Diagnosis Clear, Treatment Hazy

An increasing number of healthcare professionals have become alert to the need for well-rounded medical device security in recent years, and players throughout the industry have started putting more effort into raising the bar.

An optimistic observer might point to strides toward reaching that goal. Developers have become aware of the most glaring holes, and more information security researchers have been brought into the fold.

If nothing else, the formation of advocacy groups like I Am The Cavalry and the simple uptick in the number of vulnerability disclosures have started to chart a course toward medical devices that are resilient against attack.

Preexisting Conditions

A presentation at last month’s Black Hat security conference revealed severe flaws in pacemakers currently on the market. Their manufacturer’s unwillingness to address the vulnerabilities makes clear the extent to which medical device security has been plagued by a lack of cohesion among major health sector players and poor security hygiene among developers.

Why, despite the undeniable gains that medical devices have made, are there still gaping holes like the ones exhibited at Black Hat? Like the most intractable medical conditions that physicians sometimes must diagnose, the cause is rooted in multiple compounding maladies.

To start with, the operating conditions of medical Internet of Things devices — which encompass everything from connected insulin pumps to networked CT scanners — differ notably from those of their consumer IoT counterparts.

A key distinction is their markedly longer lifecycle, often so long that it outlives the support cycle for the operating systems they run, according to physician and security researcher Christian Dameff.

“[With] consumer IoT, there’s maybe iterations of devices regularly, like every year or something like that,” Dameff said. “Healthcare connected devices are expected to be in service for five, 10-plus years, which might be the case for something like a CT scanner, and guess what? They’ll be running Windows XP, and Windows XP will be end-of-life support by year three.”

In fact, the regulatory process that new connected medical devices must go through is so lengthy — understandably so — that they typically are years behind modern security trends by the time they hit the market, as security researcher and I Am The Cavalry cofounder Beau Woods pointed out.

“Any device that comes out brand new today probably had a several-year research and development phase, and a several-month to several-year approval phase from the FDA,” Woods said.

“You can have devices that were essentially conceived of eight to 10 years ago that are just now coming out, so of course they don’t have the same protections that are in place today [or] have modern medical device architectures — to say nothing of the devices that came out 10 years ago and are still perfectly usable, like MRI machines,” he explained.

The demands placed on always-on networked medical devices, especially implanted devices like pacemakers, impose additional operating constraints. Desktop OS developers have had decades to accrue the experience needed to settle on best-practice exploit countermeasures. However, headless medical IoT devices with zero allowance for downtime rule out many of those very countermeasures, necessitating new ones suited to medical deployment.
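
To make that constraint concrete, here is a minimal sketch, assuming a hypothetical Linux-based device: a background integrity monitor that periodically hashes the firmware image and logs any mismatch for later review, rather than forcing the reboot or lockout a desktop might use. The file path, baseline digest and check interval are illustrative assumptions, not any vendor's actual mechanism.

```python
# Minimal sketch (hypothetical): a non-disruptive integrity check for an
# always-on device. On a mismatch it only records an alert for later review,
# since halting or rebooting is not an option for a pacemaker or infusion pump.
# The path and baseline digest below are illustrative placeholders.
import hashlib
import logging
import time

FIRMWARE_IMAGE = "/opt/device/firmware.bin"      # hypothetical location
EXPECTED_SHA256 = "0" * 64                       # placeholder baseline digest
CHECK_INTERVAL_SECONDS = 3600                    # hourly, low-priority check

logging.basicConfig(filename="integrity_audit.log", level=logging.INFO)

def current_digest(path: str) -> str:
    """Hash the firmware image in small chunks to keep CPU impact low."""
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def run_monitor() -> None:
    """Loop forever: verify, log the result, never interrupt device operation."""
    while True:
        try:
            observed = current_digest(FIRMWARE_IMAGE)
            if observed != EXPECTED_SHA256:
                logging.warning("firmware digest mismatch: %s", observed)
            else:
                logging.info("firmware digest verified")
        except OSError as err:
            logging.error("could not read firmware image: %s", err)
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    run_monitor()
```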

What’s the Diagnosis, Doc?

Traditional controls definitely fall short in certain medical settings, but that can encourage innovation from developers working under specific constraints, noted Colin Morgan, director of product security at Johnson & Johnson.

“Sometimes the difference in this environment is we need to make sure that the security control doesn’t affect the intended use of the device,” Morgan said. “Let’s say a session lock on your machine. You walk away from your desk for 15 minutes, your screen locks. On some medical devices, that could defeat the intended use of that, and our job — which is the fun part of the job — is to figure out, ‘If we can’t do that control, what other controls are there to mitigate the risk?'”
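
As a purely illustrative sketch of the kind of compensating control Morgan describes, consider a session that degrades to a read-only state after 15 idle minutes instead of locking the display outright: monitoring stays visible, and only configuration changes demand fresh authentication. The state names, timeout and methods below are assumptions for this example, not a description of any Johnson & Johnson device.

```python
# Hypothetical compensating control: instead of a full session lock that could
# hide vital readings, idle time only demotes the session to a read-only mode.
# The clinical display keeps running; re-authentication is required only to
# change settings. Timeout value and state names are illustrative assumptions.
import time
from enum import Enum, auto

IDLE_TIMEOUT_SECONDS = 15 * 60  # mirrors the 15-minute desktop lock example

class SessionState(Enum):
    FULL_ACCESS = auto()   # operator may view and change settings
    READ_ONLY = auto()     # monitoring continues, changes require re-auth

class DeviceSession:
    def __init__(self) -> None:
        self.state = SessionState.FULL_ACCESS
        self.last_activity = time.monotonic()

    def record_activity(self) -> None:
        """Any operator interaction restores full access."""
        self.last_activity = time.monotonic()
        self.state = SessionState.FULL_ACCESS

    def tick(self) -> None:
        """Called periodically; demote instead of locking the display."""
        idle = time.monotonic() - self.last_activity
        if idle >= IDLE_TIMEOUT_SECONDS:
            self.state = SessionState.READ_ONLY

    def change_setting(self, name: str, value: float, reauthenticated: bool) -> bool:
        """Allow configuration changes only with fresh authentication."""
        if self.state is SessionState.READ_ONLY and not reauthenticated:
            return False  # deny the change; monitoring is unaffected
        # ... apply the setting here ...
        self.record_activity()
        return True
```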

As much as the unique requirements of medical hardware have invited creative new security controls, that initiative often has been undermined by an inadequate incentive structure.

Current regulation, while leaps and bounds from where it once was, doesn’t always dissuade manufacturers from dismissing potentially life-threatening vulnerabilities, particularly in a landscape where there is, thankfully, as yet no precedent for what happens when they are exploited in the wild.

“I don’t think this is intentional, [but] think about this: If I was a device manufacturer and I’ve got a malfunctioning device, would I write a policy to do a deep forensic investigation on every device to look for malware?” Dameff asked.

“The answer is no,” he said, “because once I find out that there’s been a compromise, and that there’s a vulnerability, I’m required to report that to the FDA, which could result in exorbitant recalls, fines, etc. So the incentive to find these types of patient harm situations, it just doesn’t exist.”

An absence of incentive is in some respects the best-case scenario: The present regulatory framework also diverts resources away from a holistic security posture, and sometimes precludes avenues for discovering flaws entirely.

No legislation looms larger in healthcare regulation than the Health Insurance Portability and Accountability Act, better known as “HIPAA.” It is undoubtedly a landmark in patient protection in the digital age, but its singular focus on privacy and the fact that its authorship predates widespread medical IoT have yielded some unintended detrimental consequences for device security.

Dameff put it bluntly: When breaching the privacy of patient data can cost companies significantly more than the breach of a device’s security controls, companies order their priorities accordingly.

“Healthcare’s scared of the HIPAA hammer, and that drives all of the security conversations,” he said. “Securing the patient healthcare information gets all their resources, because risking a breach has consequences that pay out in dollars and cents.”

HIPAA’s preeminence not only tips the scale overwhelmingly toward privacy, but can occasionally obstruct security research altogether. In scenarios where privacy and security are mutually exclusive, HIPAA dictates that privacy wins.

“If [a device] malfunctions and we’ve got to send it back to the device manufacturer [to figure out] what’s going on with it, by principle and because of HIPAA, they wipe the hard drive or remove the hard drive before they send it to them,” Dameff said.

“By policy, malfunctioning devices that have malfunctioned so bad they get sent back to the manufacturer can’t even go with the operating system, the software in which it malfunctioned,” he noted.

Time for Treatment

For all the many facets of medical IoT’s security woes, there are encouraging signs that the industry has been finding its footing and coalescing around next steps. One development that has received much praise is the FDA’s issuance of two guidance documents: “Content of Premarket Submissions for Management of Cybersecurity in Medical Devices” and “Postmarket Management of Cybersecurity in Medical Devices” — or Pre-Market Guidance and Post-Market Guidance for short.

“I will say that the FDA has come a long way in terms of giving guidance to medical device makers on how they should interpret regulations, how the FDA is interpreting regulations,” Woods said.

“So when the FDA puts out things like its Pre-Market Guidance for Cybersecurity of Medical Devices or its Post-Market Guidance for Cybersecurity of Medical Devices, that helps both the regulatory side and the device makers figure out how to build devices that do take these lessons learned into account,” he added.

Rather than merely complying with the guides’ requirements, a few players have made a point of incorporating many of the optional recommendations they outline. Speaking specifically for his organization, Johnson & Johnson’s Morgan remarked that his team has benefited from a mutually reinforcing relationship with the FDA.

“From our perspective, we have seen a lot of work that has been done over the past [few] years that has initially been driven through the FDA,” he said. “We work very closely with them — we have a very collaborative relationship with the FDA cybersecurity team — and through the starting of the guided documentation around pre-market and then post-market … there’s been a bit of a shift, and [we] are really building [them] into our quality systems.”

This climate of cooperation between regulators and manufacturers is vital to bolstering security industry-wide, because it changes the dynamic from jockeying for competitive advantage to ensuring a basic level of patient safety.

Collaboration shouldn’t, and soon won’t, stop there, Morgan suggested. One ongoing endeavor, spearheaded by the Health Sector Coordinating Council, is to create a “playbook” composed of expertise contributed by healthcare providers, device makers, trade associations and others.

It would provide guidance on what organizations of all types could do to improve security practices. By disseminating knowledge derived from the work of large companies, the playbook would let smaller ones draw on their collected wisdom.

In the meantime, there is as much to be learned and absorbed from the information security and developer communities outside of healthcare as there is from the extant guidance documentation.

Considering the lag between development and release due to regulatory oversight, it is that much more important for manufacturers to get it right the first time, and that means changing security from a supplemental exercise to one that is intrinsic to development.

“I don’t think we need medical security specialists. We just need these good practices to be built into the architectures, engineering and operation of the devices from the get-go,” said I Am The Cavalry’s Woods, “which is going to take, I think, some rethinking of what we’ve always thought of as the traditional way.”

The way medical device developers adopt this approach is by further engaging and integrating the independent research community, Dameff added.

“I think you need to be open to security researchers’ input and independent security testing of your devices before it hits market,” he suggested. “Even if the device manufacturer releases a patch for it, maybe the hospital won’t actually deploy it. So we need to be doing a lot of work up front to get these as secure as possible before they hit market.”

Even as companies have grown more comfortable with processing bug disclosures from independent researchers, some remain stubborn, as last month’s Black Hat talk demonstrated. The presenters said the manufacturer to which they had disclosed their findings still had not acted more than 500 days after receiving notice.

“There are horror stories,” Dameff said. “I feel like healthcare device manufacturers realize they can’t scorn researchers … this much anymore, partly because there’s a DMCA exemption for medical devices that’s currently in place.”

The exemption to the Digital Millennium Copyright Act, or DMCA, shields good-faith researchers testing medical devices from the legal peril of probing proprietary software, a lifeline for bug bounty hunters.

However, for researchers to make the most of the exemption, it’s essential not only that manufacturers take their input seriously, but also that the industry and its regulators allow access to as much real-world data as possible.

Woods’ organization, I Am The Cavalry, outlines measures for meeting those requirements.

“One of the things that we’ve got in the [I Am The Cavalry] Hippocratic Oath is an affirmatively sound evidence capture capability that allows you to trap potential security issues, or really any kind of failure of the device, in a way that preserves privacy,” Woods said.

“So we’re not throwing privacy out for the sake of safety, because I think they’re not mutually exclusive,” he continued, but it’s critical “to be able to get the types of logs and information that you need off the device — like firmware state, was it tampered with, was it the latest version, were there any extra programs, unexpected software.”
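
A rough sketch of what such an evidence capture record could look like, under the assumption that it deliberately carries no patient data: firmware version and digest, a tamper flag, and any unexpected software observed on the device. The field names and process allowlist are invented for illustration rather than taken from the Hippocratic Oath itself.

```python
# Illustrative sketch of privacy-preserving evidence capture, in the spirit of
# the capability Woods describes: record firmware state, version and
# unexpected software, but no patient-identifying data. Field names and the
# process allowlist are assumptions made for this example only.
import hashlib
import json
import time
from dataclasses import dataclass, asdict, field
from typing import List

EXPECTED_PROCESSES = {"pump_ctl", "telemetry", "watchdog"}  # hypothetical allowlist

@dataclass
class EvidenceRecord:
    captured_at: float
    firmware_version: str
    firmware_sha256: str
    tamper_flag: bool
    unexpected_processes: List[str] = field(default_factory=list)

def capture(firmware_blob: bytes, firmware_version: str,
            running_processes: List[str], tamper_flag: bool) -> str:
    """Serialize a device-health snapshot that contains no patient data."""
    record = EvidenceRecord(
        captured_at=time.time(),
        firmware_version=firmware_version,
        firmware_sha256=hashlib.sha256(firmware_blob).hexdigest(),
        tamper_flag=tamper_flag,
        unexpected_processes=sorted(set(running_processes) - EXPECTED_PROCESSES),
    )
    return json.dumps(asdict(record), indent=2)

if __name__ == "__main__":
    # Example: "debug_shell" is not on the allowlist, so it gets flagged.
    print(capture(b"\x00" * 16, "4.2.1", ["pump_ctl", "debug_shell"], tamper_flag=False))
```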

Finally, as Morgan put it, all of this has to meet the care providers’ needs, which can be done only by bringing them fully into the conversation.

“One of the biggest challenges we face is the post-market management,” he noted. “How can we roll our security patches to devices better in customer environments? Customer environments are all so different. So we have to constantly talk to and understand from our customers what they’re looking for from us, what their expectations are, and how we can partner better with them to roll patches out, build in what they’re looking for, so that we’re constantly reducing risk together.”

Scheduling Checkups

Ultimately, treating the poor state of medical device security is like treating patients themselves: The overall treatment must be holistic, and the various treatment measures must not conflict.

Where regulators, manufacturers and providers are in accord, there has been marked security improvement. It is where their perspectives conflict that conditions have yet to improve.

Jonathan Terrasi

Jonathan Terrasi has been an ECT News Network columnist since 2017. His main interests are computer security (particularly with the Linux desktop), encryption, and analysis of politics and current affairs. He is a full-time freelance writer and musician. His background includes providing technical commentaries and analyses in articles published by the Chicago Committee to Defend the Bill of Rights.
