2021 Healthcare Cybersecurity Priorities: Experts Weigh In

Hackers are putting a bullseye on healthcare. Experts explore why hospitals are being singled out and what any company can do to better protect itself.

Healthcare cybersecurity is in triage mode.

As systems are stretched to the limits by COVID-19 and technology becomes an essential part of everyday patient interactions, hospital and healthcare IT departments have been left to figure out how to make it all work together, safely and securely.

Most notably, the connectivity of everything from thermometers to defibrillators is exponentially increasing the attack surface, presenting vulnerabilities IT professionals might not even know are on their networks.

[Editor’s Note: This content is part of an exclusive FREE Threatpost Insider eBook that examines COVID-19’s current and lasting impact on cybersecurity. Get the whole story and DOWNLOAD the eBook now – on us!]

The result has been a newfound attention from ransomware and other malicious actors circling and waiting for the right time to strike.

Rather than feeling overwhelmed in the current cybersecurity environment, it’s important for healthcare and hospital IT teams to look at securing their networks as a constant work in progress, rather than a single project with a start and end point, according to experts Jeff Horne from Ordr and G. Anthony Reina, who participated in Threatpost’s November webinar on healthcare cybersecurity.

“This is a proactive space,” Reina said. “This is something where you can’t just be reactive. You actually have to be going out there, searching for those sorts of things. And so even on the technologies that we have, you know, we’re proactive about saying that security is an evolving, you know, kind of technology. It’s not something where we’re going to be finished.”

Healthcare IT pros, and security professionals more generally, also need to get a firm handle on what lives on their networks and its potential level of exposure. The specialized nature of connected healthcare machines, along with the enormous cost to upgrade hardware in many instances, leaves holes on a network that simply cannot be patched.

“Because, from an IT perspective, you cannot manage what you can’t see, and from a security perspective, you can’t control and protect what you don’t know,” Horne said.

Threatpost’s experts explained how healthcare organizations can get out of triage mode and ahead of the next attack. The webinar covers everything from bread-and-butter patching to a brand-new secure data model that applies federated learning to functions as critical as diagnosing a brain tumor.

Click on the image below to replay the webinar. Alternatively, a lightly edited transcript of the event follows below.

2020 Healthcare Cybersecurity Priorities: Data Security, Ransomware and Patching

Link Opens Separate Browser Window: Registration Required

Becky Bracken, Threatpost:

Hi, and welcome to today’s Threatpost webinar. Thank you so much for joining. We have an excellent conversation planned on a critically important topic: healthcare cybersecurity.

My name is Becky Bracken, I’ll be your host for today’s discussion.

Before we get started, I want to remind you there’s a widget on the upper right-hand corner of your screen where you can submit questions to our panelists at any time. We encourage you to do that.

We’ll have time to answer questions, and we want to make sure we’re covering the topics most interesting to you.

Let’s just introduce our panelists today.

First we have Jeff Horne. Jeff is currently the CSO at Ordr, and his priors include SpaceX. We want to thank Jeff so much for joining us today. We’re also joined by G. Anthony Reina; he’s currently the chief architect for Health and Life Sciences at Intel. Tony’s a physician, and he’s got extensive experience in AI, neurophysiology, telemedicine and data science.

So thank you both so much for joining us.

Before we get started, we want to have a quick poll. We just wanted to take a quick temperature of where you guys are.

I’m going to launch this right now, and it should pop up in your screen.

The question is: Over the past year, my organization saw an increase in data breaches or cyberattacks. Yes or no? We want to know if this is something you’re actively grappling with, or if this is something that you’re sort of planning ahead to combat. So I’m gonna give you just a few seconds to finish that up.

It’s pretty close, actually.

OK, we’ll go ahead and close this.

Our responses are in: 61 percent of you say yes, you have dealt with a data breach or cyberattack in the past year. And 39 percent of you said no.

So that’s an interesting split. It’s definitely something that’s happening right now. Before we get into our panelist discussion, I want to just kind of get an idea of some of the headlines that are happening right now.

We’re hit with them all the time: Nation-state actors are targeting COVID antigen firms and vaccine makers, data-sharing apps are leaky, health networks are going down, vaccine manufacturers are being hit with attacks. These are all things that are in the headlines, and there’s more: Turkish government agencies, attacks on public works. There’s just no shortage of headlines.

Kind of the biggest we’ve seen is a statement about “credible information of an increased and imminent cybercrime threat to U.S. hospitals and healthcare providers.” So this is something that is imminent, and it is happening.

But what we want to do now is have our experts separate the hysteria from everything else and give us an overview of what that should be telling us. So maybe, Jeff, you can start and just give us a reality check.

Jeff Horne, Ordr:

Sure. I mean, it’s not as broken as it seems. I think a lot of the hysteria around it is basically due to an increase. You know, we absolutely have seen an increase in either, you know, targeted attacks against healthcare organizations, or, you know, simple attacks like phishing and spam working on users as they’ve kind of migrated to the home.

We’ve also seen organizations, healthcare included, stand up remote services on the edge, like Remote Desktop Services, in order to facilitate a remote workforce.

So that increased attack surface has, you know, caused an increase in hacking events. So while some of the hysteria is overblown, we’ve definitely seen an increase.

With healthcare in particular, I think that we’ve seen, obviously, policy, in terms of cybersecurity policy and IT procurement policy, kind of go by the wayside in order to bolster patient health and patient care in a pandemic.

So, you know, one good example would be devices that make their way into the organization.

Usually they go through some sort of procurement process with clinical engineering, IT, operations, maybe security. And, you know, obviously, when you’re trying to get ventilators that are transitory into an organization as quickly as possible, some of those steps get skipped.

You know, not only are we seeing in healthcare an external attack-surface increase, but absolutely an internal attack-surface increase as well, as you see more focus on patient care and patient health during a pandemic, versus cybersecurity and security awareness.

Becky Bracken:

So what do you think about that, Tony? Are you in agreement with that, generally?

Anthony Reina, Intel:

Yeah, absolutely. I mean, I think some of the things that I’ll be talking about are actually potentially new attack vectors, so I’m kind of previewing what’s coming. But these potentially open up unnecessary new vulnerabilities along with a lot of possibilities, particularly in the cloud, where we need to have these distributed datasets over the world. Because this is a global pandemic, and it has to be solved with global datasets.

Becky Bracken:

OK, let’s get into sort of the most pressing issue, which is the human cost. We’ve got connected devices inside of our bodies; we’ve got devices everywhere, putting our data all over the place. What is it that we need to know about the human harm that can come, and what mitigation efforts need to be taken to protect human safety?

Jeff, let’s just start with you, from a device-safety perspective.

Jeff Horne:

Obviously, you know, we’re seeing more and more medical devices go through kind of a transformation from a development perspective, in terms of adopting things like security development lifecycles.

You know, in the sense that we’ve gotten devices that are inherently vulnerable, right? Some of this stuff doesn’t even pass the smell test in some cases, from a security perspective. In the sense of, you know, hey, I’m going to have an implantable cardiac defibrillator in my chest that has Bluetooth and connectivity; you know, I want to make sure that that’s at least hardened.

So, you know, we have seen device manufacturers adopt that, it has just been extremely, extremely slow.

So, you know, we’re seeing medical devices rolled off the line with vulnerable operating systems.

You know, Windows 7 and Windows XP SP0 are kind of commonplace within healthcare. And also, these IoT devices don’t have a life cycle of one to two years. You know, you buy an MRI machine that has, you know, Windows XP SP0 on it; that could live on the network for 10 years or more. And it’s extremely expensive. And obviously there are no patches available for it.

So we’re seeing more security awareness around healthcare devices. But what’s really interesting, when I started really getting into healthcare security, is you started seeing kind of a modulation of an attack vector, a vulnerability or an exploit, married with a device risk. And, you know, something simple: Like, every time that I saw, oh, it’s a denial-of-service vulnerability, you know, great.

It becomes a serious vulnerability when you’re starting to talk about a device that’s connected to a human, right, a potentially lifesaving device. So you get these things where these vulnerabilities have to be modulated by the device and how the device is typically used. And sometimes that’s a very organization-specific thing. So a low-priority denial-of-service attack could have fatal consequences.

You know, from a device and organizational perspective.

Becky Bracken:

Is there any kind of push to standardize security across these devices? I mean, is there a body or a group that’s working on some sort of basic security standards for these things?

Jeff Horne:

Uh, there are but, honestly, I would say that security standards are the basic foundation, right? It’s the lowest level of security that you could possibly have, right? It’s one of those things where, when regulations are put in place, you’re basically saying, hey, I got to this minimal level without being fined for being just … wrong. So when I look at security models, security frameworks, particularly device security, it is absolutely the bare minimum. I mean, they’re talking about things like, “Hey, we should encrypt our data.” Like, of course. Welcome to 30 years ago.

Becky Bracken:

What do you see, Tony? From a physician’s perspective, it might even be a little scarier.

Tony Reina:

I like what Jeff said about the standards being kind of the base. If you talk about things like HIPAA, it’s really policies rather than prescriptions of, you know, here’s exactly what you need to do; you need to use this standard with this machine.

It’s really: Tell us what your policy is, and that needs to meet these kinds of minimum requirements. Some of the things with device management have been really interesting. What I’ve been seeing is, think back a few months ago, when we still had this question about the number of ventilators out there. A device like a ventilator is usually not a connected device on the hospital network. It’s usually something that sits by the patient, and it’s something that the medical staff has to go around and adjust as the patient gets better.

What we saw with the pandemic was that, you know, you’d have to basically gown-in and gown-out of the room because they were isolated rooms.

It might take you five minutes to kind of gown-in, five minutes to gown-out, just to go in and make a slight change to a ventilator. And so some of the questions that immediately come up were, “can we do this wirelessly?” which doesn’t mean remote in terms of like in the next building, it means 10 feet outside the door. And these devices weren’t necessarily set up to do that.

So there’s been, you know, a lot of push to kind of back-engineer some of these things and try to take some of the existing technologies to do these healthcare wireless systems or remote-monitoring technologies while building in the security. But it took the pandemic to realize we needed that security literally 10 feet outside the door.

Becky Bracken:

OK, OK, interesting.

Alright, so now we’re going to move on. This is Tony’s bit, so this is all really new, interesting stuff: federated learning and the data silo.

Tony Reina:

Yeah, my apologies to begin with, because if you’re a hospital administrator for IT, you’re probably going to go, “I don’t wanna do this.” But let me tell you why you’re going to need to do this. So here’s the basic issue.

We’ve got so much data out there for health care worldwide.

And I don’t even mean just healthcare data; you can talk about any sort of data that’s out there. There’s a lot of data out there, and data scientists want to get access to this data.

The problem is that even though there’s a ton of data, it’s usually siloed.

And so what happens is that you’ve got, you know, lots of data in New York, and lots of data in Mumbai and lots of data in Moscow.

And none of this data can actually be centralized. And that’s the typical data-science playbook: you get all the data centralized, and then you have your crack staff kind of go at it and try to come up with this supermodel that detects something about … or whatever the disease is. We get data-silo problems for a host of reasons: privacy and legality, obviously. In the U.S., there are HIPAA laws around healthcare information.

The EU has things like GDPR, which is an even broader extension of that, to general privacy. Sometimes the data is too valuable. You’ll hear the kind of catchphrase “data is the new oil”: if you have oil and you don’t know how to price it, you want to get something out of opening up this data, you know, if it’s going to help. And then sometimes the data’s literally too large to transmit.

So if you’ve got a petabyte of data, it’s going to be prohibitive to actually transmit that data to some central, you know, bucket up somewhere in the cloud, or wherever it is, and try to centralize all this data.

So for those reasons, it’s really hard to get to those datasets and come up with a model that would work as well in midtown Manhattan as it will in midtown Mumbai or midtown Moscow.

And that’s the model you really want.

So next, let’s talk about what this federated learning is. It has nothing to do with the federal government; it has nothing to do with government in general.

The term was coined by Google, because they ran across the same issues. In the original paper, this was about five years ago or so, they were looking at your cell phone. So you’d pull out your cell phone, and you’d start to type a message to your loved one, and they would want to do predictive typing.

So you’d be writing: “I’m going to the store. Do you want me to pick up …” blank. And it will try to fill in whether maybe you wanted milk, or whether you wanted eggs, or whatever the next word is.

Google realized that it wasn’t really privacy-friendly to literally send your IMs up to Google and have some Google data scientist read all of your IMs to come up with what that model should do. So they ended up coming up with this concept of federated learning. And the idea is that you’re actually not going to move the data anywhere. The data just lives where it lives, on your cell phone.

When you, in their case, plug your cell phone in at night, and you’re on a Wi-Fi connection, they can tell that, and they can say, OK, I’m going to train a neural network on your local data. And I’m going to send the model out. So the model moves around, not the data.

And that’s a much easier kind of thing.

And then basically the models, plural, come back from every user that they’ve trained on, and you just have to come up with some way of getting a single consensus model. You know, Jeff trained it this way, Becky trained it this way, Tom trained it this way; I now come up with this global model that’s a conglomerate of all of theirs, an average of all of theirs, if you will.
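[Editor’s note: For readers who want to see the consensus step Tony describes in code, here is a minimal, illustrative sketch of a FedAvg-style weighted average, assuming each site reports its locally trained weights and its sample count. All names, shapes and numbers are hypothetical, not from the webinar demo.]

```python
# Minimal sketch of federated averaging (FedAvg-style): the consensus
# model is a weighted average of per-site weights; only weights move.
import numpy as np

def federated_average(site_updates):
    """site_updates: list of (weights, num_samples) pairs, where weights
    is a list of numpy arrays, one per model layer."""
    total = sum(n for _, n in site_updates)
    num_layers = len(site_updates[0][0])
    # Weight each site's contribution by how much data it trained on.
    return [
        sum(w[layer] * (n / total) for w, n in site_updates)
        for layer in range(num_layers)
    ]

# Toy round: three "hospitals" train locally and report back.
rng = np.random.default_rng(0)
updates = [([rng.normal(size=(4, 2))], n) for n in (100, 250, 50)]
global_weights = federated_average(updates)  # the single consensus model
```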

So this sounds great. Data never moves, so, hey, we’re great for privacy, right? Or great for security: my data is never going to move, so I’m going to open up my hospital data to this, because I’m safe, right?

This is where my research kind of comes in: to look at, well, what sorts of security models and security attack vectors do you get by doing that? Because the data doesn’t move, but is there anything else that is an issue?

One thing is, now you’re moving models around.

And if you’re a company and you want to create a model, that’s your IP; that’s why you’re in business, to create this model that you can then sell or use. Now your model’s exposed.

Is there a way to protect that, so that you put it out there and somebody doesn’t just steal your model and have a great startup?

Even more nefarious, you could actually start to poison models.

So, you know, I might trust Jeff.

But if Becky and Tom are sending back models and they’re doing the right thing, Jeff might, knowingly or inadvertently, not be doing the right thing. And he might send me back something that prevents me from ever training a model correctly. And now I’ve spent all of this time trying to get this global model that never actually converges.

What’s even worse is that Jeff could do the same sort of thing and cause Becky’s and Tom’s models to memorize more of their data in order to make their local models better.

And you could do something called a model-inversion attack. These are actually things that are in the literature, so I haven’t made these up; I can point you to the academic researchers who showed these attacks. The cell-phone image, the image that you see there, is actually from the paper; one site was able to steal that image from another site. As you can see, the basic issue there is that the model itself has information about the data.

And if you are crafty, you can actually get data from models by doing that.

I think the final one, yeah.

So obviously, I work for Intel. I’m not an official spokesperson, so everything I’m saying today is really just me talking about my own research and, you know, what I think is important. But I will say (shameless self-promotion) that at Supercomputing 2020, which is going on this week, we actually have a demo showing federated learning using an Intel security technology called SGX, Software Guard Extensions.

It’s a trusted place in compute and memory where you can basically have what’s called an enclave.

And the enclave is only accessible by you. So I could run something on Jeff’s computer, and Jeff couldn’t actually access it unless he had my key to access it. So I can run it on an untrusted computer and it’s protected.

And I can basically get around these issues that I talked about: model poisoning, trying to steal models, trying to memorize data, things like that.
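[Editor’s note: Real SGX enclaves are built with Intel’s SDK (C/C++) or frameworks such as Gramine. The toy Python below, which uses the third-party “cryptography” package, only illustrates the access model Tony describes: output computed on an untrusted host stays sealed to the model owner’s key. It is a conceptual sketch, not SGX code.]

```python
# Conceptual sketch of the enclave access model: the host machine runs
# the computation but can only see sealed (encrypted) output.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

owner_key = AESGCM.generate_key(bit_length=256)  # stays with the model owner

def run_sealed(model_update: bytes) -> bytes:
    """Pretend 'enclave': returns output only the key holder can read."""
    nonce = os.urandom(12)
    return nonce + AESGCM(owner_key).encrypt(nonce, model_update, None)

sealed = run_sealed(b"layer-weights...")
# The untrusted host cannot read `sealed`; the owner can unseal it:
nonce, ciphertext = sealed[:12], sealed[12:]
assert AESGCM(owner_key).decrypt(nonce, ciphertext, None) == b"layer-weights..."
```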

So I encourage you to kind of go to that site. And, yeah, I’ll kind of open it up. I know that was a lot.

Becky Bracken:

It’s good. Talk a little bit about sort of the practical applications of this. I mean, this is tackling big problems.

Tony Reina:

Right, absolutely. So we actually had a Nature paper that was published a few months ago looking at why federated learning would be useful in the healthcare space. In this example, what you’re seeing is actually brain-tumor segmentation. This is literally a neural network, a deep-learning AI model, that is taking MRIs and, imagine if you had a crayon and you were trying to color in the section that’s the tumor, the brain tumor. You can imagine how important this would be: to have something that can just label the areas of an MRI of the brain where the tumor lives.

But what we were able to show with this paper is that, in this case, we had a dataset that was open, where we could get all the data centralized.

We could compare how it trains in a federation. The federation was able to get to 99 percent of the performance of the centrally trained model.

But with the federation, we never had to actually move data around. We could keep it in the original hospitals. You know, we knew what the original places were.

And we were able to show that, basically, if you were to just train at a single hospital on just the local dataset, the final model with all the data was 17 percent better, on average, than if you just trained at any given place. And even if you thought you were, like, the best hospital in the world and had the best golden dataset, you could still do a little over 2 percent better by joining a federation.

There’s a big data-science kind of mantra that the more data you get, even if it’s not necessarily fantastic data, you learn enough that you kind of bring things up. It’s kind of like on, you know, “Who Wants to Be a Millionaire?”

You poll the audience, and even if the audience isn’t expert, the collective knowledge of the audience, if you look at the statistics, is usually going to get the right answer, because not everybody has to be an expert.

You can get a bunch of poor predictors, put them together, and you’ve actually got a super predictor.
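[Editor’s note: Tony’s “poll the audience” point is easy to verify numerically. This quick simulation, with made-up numbers, majority-votes many weak, independent predictors into a strong one.]

```python
# Many independent 60%-accurate "audience members", majority-voted.
import random

random.seed(1)
WEAK_ACCURACY = 0.6    # each voter is right 60 percent of the time
VOTERS = 101
TRIALS = 10_000

correct = 0
for _ in range(TRIALS):
    votes = sum(random.random() < WEAK_ACCURACY for _ in range(VOTERS))
    if votes > VOTERS // 2:    # the majority lands on the right answer
        correct += 1

print(f"majority-vote accuracy: {correct / TRIALS:.3f}")  # roughly 0.98
```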

Becky Bracken:

Wow, that’s super interesting. Now, Jeff, I know you have some thoughts on this, too. What are you seeing here?

Jeff Horne:

Of course. I mean, I think it’s incredibly, you know, incredibly interesting that we’re at this level in healthcare. I’m a huge proponent of data protection and data privacy.

You know, I think the federated learning model is really nice when applied to things like HIPAA problems, GDPR problems. But it also speaks to kind of the diverse nature of the healthcare network.

You know, you have lifesaving equipment.

You’ve got, you know, ventilators that are now trying to become wireless, because in a pandemic you want to be able to operate them outside of the door without gowning-in.

You know, robots that are conducting surgery; a lot of the surgery centers are, you know, leaning on robotic surgeries for less-invasive procedures.

And, yeah, being able to do federated learning and applying AI models to something as important as tumor diagnosis, sometimes that is incredibly important.

But the old model, from an IT perspective or from an engineering perspective, was: Oh yeah, let’s take all this imaging data, all of this really private and potentially sensitive data, and send it up to a cloud to have a giant computer process it.

It might have been the easiest way to do it, but it’s not the most secure way to do it. And it definitely lends itself to privacy issues.

But then what we’re finding out is that, if we put our privacy hat on, there’s actually a better model: being able to do it at the edge, right?

As Tony mentioned, you know, it’s better to do predictive learning in pieces in these types of scenarios, very similar to how Google does predictive text analysis on phones.

But it also brings up these very academic, or I would consider them academic, questions. I mean, I know groups of security friends will just sit there, have some cocktails and talk about adversarial AI, right? In terms of, like, hey, how could you mess with models? How could you poison the data? But, you know, these are things that people are thinking about in terms of machine learning and AI.

Tony, I think you mentioned, like, Byzantine statistics, right? Being able to find the covariances between model sets in order to make sure that you’re trusting that my input is good, but you’re also verifying it statistically?

That’s right, yeah.

Tony Reina:

I mean, so with Byzantine fault tolerance, and this is not just in this field; it’s been talked about for 30 years. But the basic idea, and I don’t want to keep harping on Jeff as being the bad guy, but imagine Jeff is my adversary. And I want to ask: Jeff, hey, can you take this thermometer and go out and measure the temperature for the next 10 days? And Jeff says, I’ll do that. And I say, I don’t want the actual temperatures; just give me the average temperature.

And Jeff comes back 10 days later and says, “The average temperature is 125 degrees Fahrenheit.”

Now, I might go, OK, maybe he’s living in a really hot place, you know? But that seems kind of hot, you know? And so the point is, for adversarial stuff, without some sort of attestation, without some sort of way of Jeff saying, “Here’s the number, and here’s proof that I actually measured the temperature over the last 10 days, and this is what I did,” I don’t know what the numbers are that he actually measured. SGX and trusted execution environments, these kinds of security models, have this idea of attestation in them.

So I’m giving Jeff algorithm code that I want him to run. And not only does he run that code on his data, but he has to prove it; he has to send me a receipt that says, “I ran the code on the data.” Still, you know, I’ve still got to believe that everything worked well.

But at that point, you’re trusting the hardware itself. You’re trusting this cryptographic hash that comes back. So there’s always a root of trust, but at least you’re not having to implicitly trust Jeff when he says it’s 125 degrees on average around my house.
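[Editor’s note: The “signed receipt” idea behind attestation can be sketched as follows. Real SGX remote attestation involves CPU-held keys and Intel’s quoting infrastructure; this toy stands that in with an HMAC over a hash of the code plus its result, so a tampered result or swapped algorithm fails verification. All keys and values here are illustrative.]

```python
# Toy attestation: bind a result to a hash of the exact code that ran,
# signed with a key the (pretend) hardware holds.
import hashlib, hmac, json

HARDWARE_KEY = b"burned-into-the-cpu"  # illustrative stand-in

def run_with_receipt(code: str, data: list) -> dict:
    result = sum(data) / len(data)              # the agreed-upon job
    code_hash = hashlib.sha256(code.encode()).hexdigest()
    payload = json.dumps({"code": code_hash, "result": result})
    tag = hmac.new(HARDWARE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify(receipt: dict) -> bool:
    expected = hmac.new(HARDWARE_KEY, receipt["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["tag"])

receipt = run_with_receipt("mean(temps)", [72.0, 75.5, 71.2])
assert verify(receipt)   # a tampered payload or forged result fails here
```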

Becky Bracken:

What is all of this asking of the hospital administrators you brought up at the beginning?

Because it seems to me you’re asking them to completely blow up the old way of doing things and realign. Or are there incremental steps, you know, at the edges, that we can start taking that might work OK with some existing infrastructure?

Jeff Horne:

I think, from a security perspective, I do see more and more security teams, CISO teams, being involved in the procurement process. It’s really good to have trusted vendors, right? Vendors that you know are going to be doing it right.

And that you can verify are doing it right. You know, vendors like an Intel, in the sense of being able to do the trusted-computing side with a secure enclave.

I know some people don’t understand what that means. But from a cybersecurity perspective, that means they’ve actually thought about data privacy and data protection as part of the ecosystem for their product.

It’s very easy in healthcare for me to pick on certain devices.

You know, I mentioned MRI machines that are coming off the factory line right now with Windows 7 or Windows XP SP0, running web servers and FTP (File Transfer Protocol) servers, right there, and they can never be patched.

And robots. These robots, I’m not gonna pick on an individual vendor, but that is something so cool. I mean, I think robotic surgery is absolutely, incredibly interesting.

I’d love to have a discussion around, like, AI learning procedures off of robots, in the sense of figuring out how to do remote surgery. If you were able to have the doctor remote into the machine, I think that would be incredibly interesting.

Um, but at the same time, I’ve seen several robots where the way that they do their support ticketing system is literally GoToMeeting, what we’re on right now. You know what I mean?

So there needs to be more security people in not only the development of these devices, but also the procurement of these devices. And I think that healthcare organizations and the healthcare industry are really starting to learn this, in the sense that a lot of these devices are not inherently encrypted.

More and more simple, and I would even say dumb, vulnerabilities happen on these medical devices every day. I mean, if you just read some of the headlines, it looks like we’re back in the ’70s in the sense of vulnerability disclosure, right? Where it’s like, hey, if you go to this website, you can change the password on the device.

Or, hey, it has a root username and password, you know, a username and password with root or system-level access to the device, that can’t be changed. Or a private key that’s static across every device that was ever made for a particular vendor.

I mean, these types of things are, I would say, egregious in just security in general at the enterprise or, you know, Fortune 500 companies. But at the end of the day, we’re seeing these come down into all sorts of Android devices, cameras, security systems, elevator-control systems and, unfortunately, medical systems.

Becky Bracken:

Wow. So, Tony, how do we get the house in order? I mean, where do hospital administrators start?

Tony Reina:

Yeah, I think the first thing is just going to be their policies. This is a proactive space. This is something where you can’t just be reactive. You actually have to be going out there, searching for those sorts of things. And so even on the technologies that we have, you know, we’re proactive about saying that security is an evolving kind of technology; it’s not something where we’re going to be finished. It’s just something where, OK, let’s try to see how we can manipulate that; is there anything that we can do with that? So you’ll hear about, you know, this has been broken, or this has been broken, or this has been broken. We’re actually going out there and trying to figure out where those holes are and continuously plugging them. The same thing has to be said for the hospital IT administrator: Don’t assume that your system, as it is, is finished.

Keep doing audits. Keep doing regular maintenance. Keep doing updates. You know, don’t wait to have these things happen. Have policies in place to do this.

There are some things that can help with that. Particularly on the hardware side, a lot of companies, including Intel, are really getting into security all the way up from, let’s say, hardware, BIOS, kernel-level operating system, libraries, you know, all the way up. And basically, it’s a root of trust all the way down to something that’s embedded in the processor.

That’s a key that, you know, is unique to that processor. And so you can say: If I’m running Windows, has my Windows been hacked?

Is it the same hash of Windows as when the biomedical engineering tech, you know, certified this machine?

Has anybody put some weird library or something like that into Windows? I think I’m running this kernel, but it’s actually this other kernel. There are now technologies that can basically follow up the stack and say: This hash is good, this hash is good, this hash is good.

So by the time that machine boots up, you’re like, I at least trust that the operating system I’m using is the one that has been approved and hasn’t been somehow hacked.
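[Editor’s note: The hash-the-stack idea Tony outlines is essentially measured boot. The sketch below, illustrative only, shows a TPM-style “extend” chain: each boot stage folds the next component’s hash into a running measurement, so swapping any component changes the final value. Real systems anchor this in a TPM or the CPU, not in Python.]

```python
# Measured boot, toy version: extend a hash chain across boot stages.
import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    """TPM-style extend: new = H(old || H(component))."""
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

def measure(chain):
    m = b"\x00" * 32                     # root value held by hardware
    for component in chain:
        m = extend(m, component)
    return m

# Recorded when the biomedical tech certifies the machine:
golden = measure([b"bios-image", b"bootloader", b"kernel", b"os-libraries"])
# A later boot with a swapped kernel produces a different measurement:
tampered = measure([b"bios-image", b"bootloader", b"evil-kernel", b"os-libraries"])
assert golden != tampered
```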

Jeff Horne:

I think a really good example of being able to test your processes is this whole ransomware thing that’s hitting healthcare right now.

Mainly because it’s not a malware problem. The ransomware that’s being used is relatively simple. These organizations are truly getting hacked by malicious actors.

They are inside the network distributing malware and malicious binaries. But one of the things that we see, obviously from a proactive standpoint, is that the solution to ransomware is a robust backup strategy: a backup strategy that includes backups every day and, in some cases, offline backups that hackers can’t touch without having physical access to the environment.

But, you know, a lot of these organizations are just trusting that these things are happening, right?

They have a robust backup strategy on paper. A lot of these healthcare organizations will say they have a robust backup strategy; they have cybersecurity insurance, where they went in and met the minimum standard of, hey, we have a backup strategy.

But they’re finding that, for some of these healthcare organizations that even have a backup strategy, the price of the ransom is actually less than the cost incurred by the downtime to take that backup and replicate it across their entire network. Which is exactly what the malicious actors are counting on.

Becky Bracken:

And they change their pricing to reflect that, right?

Jeff Horne:

Exactly. They’ll change their pricing in order to say, hey, they could restore from an offline backup, but it’s gonna be more than $100,000 from a downtime perspective, and, you know, five days to take all of these offline backup systems and then replicate them across the network.

And yeah, I mean, that’s really why healthcare is being targeted right now: because they’ve got an increased attack surface.

You know, there is a definite lack of cybersecurity awareness. And I think it’s the right thing to do, honestly; things are being repositioned to save people’s lives. But at the end of the day, there is this increased risk.

That opens them up to compromise, particularly from ransomware operators that are taking data, stealing data, destroying data, and then ultimately asking for $25,000 to $250,000 for an initial payment.

Becky Bracken:

I’m hearing you say that you don’t necessarily advise just paying off the ransomware. I guess I’ll form it as a question: What would your advice be if you are sitting in a hospital, and you see that this has happened to you, and you could cut a $100,000 check and make it go away? It doesn’t go away, right?

Jeff Horne:

Right. First, the payment is really just to get your data back. And there’s no guarantee that you’re gonna get your data back.

You know, there are three ransomware groups that are targeting healthcare organizations right now, and all three of those run slightly modified versions of older pieces of ransomware once they get into the network.

A lot of their tools, tactics and procedures are based on getting into the network through paid access. So we see operators literally selling their access to these organizations to the ransomware operators.

There have been several instances of hackers on, you know, something like RaidForums, on the Tor anonymous network.

Selling remote desktop access to some hospitals for a thousand dollars. So it’s like: Hey, I’m in this hospital; I brute-forced this Remote Desktop Protocol system on the edge; it has access to 20 terabytes of information; I’m selling it for one thousand dollars to ransomware operators that can turn that into $200,000.

And no, I’m not advocating paying the ransom at all, just because there is no guarantee that you’re going to get your data back.

But it is a very, very difficult situation.

When you’ve not done anything from a proactive standpoint, if you’re sitting there and you don’t have a robust backup strategy, you don’t have AV, you don’t have insight into the network, you don’t even know what’s on your network.

In some cases, if you don’t have a real-time asset inventory or asset-management system, it’s very hard to say, what could I have done in order to prevent this? Because there are several proactive things.

But you’re in a literal state of: It’s going to either take me millions of dollars to rebuild my entire business, or recover from a backup from six months ago, because I don’t have a robust backup strategy, or just pay the ransom and be done with it.

At the same time, a lot of these organizations do have cybersecurity insurance, which makes that payment a little bit more palatable.

Becky Bracken:

Now, what do you think, Tony? Do you have thoughts on this, from the hospital side, talking a hospital administrator through one of these scenarios?

Tony Reina:

Yeah, not on that sort of scenario exactly.

I guess I would kind of pivot toward the actual data itself. You know, backing up to some sort of secure storage is definitely the right thing to do.

But then we’ve even seen potential vectors of, you know, what if I want to change one MRI? If it comes off the device, how do I know that it’s still Mr. Smith’s MRI? And Jeff was talking about adversarial attacks. What if I’m able to change Mr. Smith’s MRI slightly, so that maybe even a human being can’t necessarily tell the difference, but one of these whiz-bang AI models ends up saying the tumor is not there anymore?

Because I’ve done this adversarial kind of imaging. So I know that there’s part of the DICOM standard, which is the standard for medical-imaging formats, where one part of it is to actually have a hash of what the image was, to show that it is the actual image.

So that’s possibly part of the thing to think about: backing up things, but then having maybe even a separate store that says, when I get that backup coming back, no one has actually fooled around with the individual dataset, because I have a hash of what it should have been, you know.

Not the image itself, but the hash of what the image was supposed to be. And that gets us back to that adversarial learning aspect.
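[Editor’s note: The integrity check Tony suggests, hashes kept in a store separate from the backups themselves, can be sketched like this. File names, paths and the .dcm extension are illustrative assumptions, not a DICOM implementation.]

```python
# Record digests at backup time; verify them after a restore.
import hashlib, json, pathlib

def digest(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_digests(image_dir: str, manifest: str = "manifest.json") -> None:
    """Run at backup time; keep the manifest somewhere the backup
    system (and an attacker on it) cannot write."""
    entries = {p.name: digest(p) for p in pathlib.Path(image_dir).glob("*.dcm")}
    pathlib.Path(manifest).write_text(json.dumps(entries, indent=2))

def verify_restore(image_dir: str, manifest: str = "manifest.json") -> list:
    """Run after restore; returns the names of files whose bytes changed."""
    expected = json.loads(pathlib.Path(manifest).read_text())
    return [name for name, h in expected.items()
            if digest(pathlib.Path(image_dir) / name) != h]

# e.g. tampered = verify_restore("/restored/mri"); alert if the list isn't empty.
```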

Jeff Horne:

One of the cruel tricks of these ransomware operators I’ve seen before is, you know, one of the things that they do first and foremost, once they get on the box, is they’ll turn off logging, right? No logs, no crime, right?

So they’ll turn off logging, in terms of, like, PowerShell logging and things like that; they’ll turn off the antivirus; and then they’ll start attacking the system itself, right? So take Windows: It makes shadow copies of important system files, so when you damage your Windows and you go into a recovery state, you’re not necessarily having to restore from a full backup; you’re literally just able to take your system DLLs and restore them from a known-good state, from a shadow copy. And then you have things like System Restore points, right?

And so one of the tactics that I’ve seen, at least in one ransomware-as-a-service family, is they’ll actually go in and first just destroy all the System Restore points.

They’ll just turn off System Restore and delete all of them.

And then, you know, obviously they’ve removed the shadow copies of the files, so they can actually destroy the operating system as well, if they want to.

But some of them actually take a System Restore point and backdate it two weeks in the past. But the System Restore point they took has all of their binaries and all of their lateral-movement tools on it.

So, you know, I’ve seen network administrators be fooled by this, in the sense of, like: Oh guys, we’ve got ransomware, but we’ve got good System Restore points from three weeks ago, so let’s not pay the ransom right now; let’s spend two or three days recovering from a known-good System Restore point. And now they’ve wasted two or three days.

It’s pretty mean.

There was a pretty good one, yes, on Twitter or something, where all the printers just started spitting out the ransom notes. That is a favorite, because at the end of the day, I’ve honestly been infatuated with this: The moment that you can write a piece of code, or do something on a computer, that moves something physical, that is incredible to me. That was one of the things that really captivated me from a programming standpoint.

And, yeah, so of course they’re popping up windows, if there’s a GUI interface or a monitor on the device; of course they’re sending emails to the IT administrators and to the entire company through distribution lists on the internal email server. But absolutely, there are several ransomware-as-a-service groups, Zeppelin was one of these, where all of a sudden the printers just started printing out: Hey, your network is compromised; you need to pay. And yeah, that, of course, is something that gets leaked on Twitter.

Because it’s cool. I mean, all these printers spitting out ransom notes is a horrible visual. Sorry.

Well, it gives you a physical copy of something that the nurses, anybody that was there, could walk away with. Because, you know, from a HIPAA perspective, most hospitals are HIPAA- and HITRUST-compliant; every mobile device that a doctor or a nurse brings into the facility has mobile-device management on it, just to make sure that they can’t take pictures of their screen. If they do, the IT organization might not be looking for it, but they could at least be materially aware of it if it starts to occur. So one of the first things organizations do in a ransomware event is, hey, don’t talk about this, right? Let’s cover this internally, because we don’t want the shares of our company going down because we’re going through a breach.

But yeah, if you’re sitting there with an 8.5-by-11 piece of paper walking out of the building, that’s going to bypass any sort of mobile-device management, or any sort of data-loss-prevention tool sets, that you might have in place in order to lock down the external breach exposure.

Becky Bracken:

All right, well, we’re already knee-deep into our next topic, the healthcare ecosystem. I think we’ve already touched on a lot of the devices and the attack surfaces that may be overlooked; if there are any others you think we’ve missed, let’s talk about those. But I also wanted to talk about the evolving role of the engineer within the healthcare IT space.

So, Tony, if you want to start with that and give us your feedback.

Tony Reina:

Yeah, sure. So, I mean, I think it’s becoming more and more evident that it’s connected devices everywhere.

So there are a lot more devices that you’re going to have to wrangle and have security policies around.

You know, what I’ve seen in hospitals is a lot more wireless connections, just because it’s really hard for the infrastructure to support putting things into walls.

And, you know, actually moving wired connections around. This, again, opens up the attack vector around Wi-Fi protection. So you’re really going to have to have this sort of proactive modeling around: How do I protect these Wi-Fi interfaces? How do I protect against somebody hacking into these sorts of networks?

Obviously just basic TLS encryption is going to be standard for everything.

For example, one of the things that came along with these ventilators was having dashboards at nursing stations, and having kind of wireless displays. One company that Intel had worked with is called Sickbay; they had all of these devices that were connected in hospitals: the EKGs, the pulse oximeters.

Blood pressure, all those things that you end up getting plugged into. It just opens up the space enormously, and you have to add in that security of: What if I put a new device onto the network; what if I plug in a new sensor?

The idea is, and what we’ve seen is, you can basically have that device up and running in, like, four minutes. But it’s going to go through this security procedure where it announces itself and negotiates with the network: I am who I say I am.

It’s not just: I plug it into the network and, oh, I now have access. I have to prove to the network who I am. I set up my TLS certificates back and forth, and now I’ve got an encrypted, authenticated way of connecting to it.

That’s now going to be bread and butter for a hospital engineer: being able to deal with this healthcare internet of things, where you’re going to have all sorts of devices being either plugged in or wirelessly joined to your network, and handling these security-certificate exchanges, these proofs of “I am who I say I am, and I’m authorized to be on this network.”
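[Editor’s note: A small sketch of the certificate exchange Tony describes, using Python’s standard ssl module: the gateway admits only devices presenting a certificate signed by the hospital’s CA (mutual TLS). Certificate file names and the port are illustrative assumptions.]

```python
# Mutual-TLS onboarding: the device must prove its identity before it
# gets an encrypted, authenticated connection to the network gateway.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("gateway-cert.pem", "gateway-key.pem")
ctx.load_verify_locations("hospital-ca.pem")   # trust anchor for devices
ctx.verify_mode = ssl.CERT_REQUIRED            # device MUST present a cert

with socket.create_server(("0.0.0.0", 8443)) as server:
    conn, addr = server.accept()
    try:
        with ctx.wrap_socket(conn, server_side=True) as tls:
            subject = dict(item[0] for item in tls.getpeercert()["subject"])
            print(f"admitted {subject.get('commonName')} from {addr}")
    except ssl.SSLError:
        print(f"rejected unauthenticated device at {addr}")
```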

What do you think?

Jeff Horne:

So, I mean, I agree with Tony.

Yeah, I would say I’m more pessimistic on the device manufacturers getting there. I think we exist in this really kind of opportunistic time.

Where few of these devices are natively encrypted; they’re not using TLS. And I’d say it’s a very opportunistic time because, you know, I work at an organization where I’m seeing all of this unencrypted traffic, and we’re using it in order to create machine-learning models on devices in healthcare, in order to figure out, you know: Is this an MRI machine by GE? Is this an Avigilon camera? And we’re largely able to do that through unencrypted communications, particularly in the way that a device reaches out to get updates, service manuals, things like that.

So, I, you know, I definitely see device manufacturers going the way of encrypting all of their data.

But from an organizational perspective, I think that a lot of healthcare organizations are learning that their asset inventory is not up to date.

Particularly, one of the exercises that we’re seeing people walk through from a ransomware perspective is, you know, let me pick on the MRI machine again: That MRI machine runs Windows 7, no new security patches are available for it, and it’s running SMBv1 and transmitting these images over DICOM.

That’s vulnerable to a five-year-old exploit.

You know, having your MRI machine be the pivot or the distribution point for ransomware is very difficult to deal with, because you can’t install AV, you can’t install endpoint detection.

So what we’re seeing organizations do is say, hey, look, we need an asset inventory that includes everything, right? When I was an incident responder, I would walk into organizations and, you know, when they told me that they had an up-to-date asset inventory, I would say: OK, well, where’s the Samsung television that runs Linux? Where is that on your asset inventory, and what is the threat model for it? Right? Because, you know, televisions are honestly easy to pick on as well, because from an attack-surface perspective you’re thinking, OK, well, I’ve got to plug a keyboard into the device, or it’s going to have Bluetooth or wireless. But most TVs can be hacked with the IR remote that you can go get from Bed Bath & Beyond. And a lot of times that works from outside the organization, as long as you’re within 25 or 30 feet.

But what we’re starting to see in healthcare in particular, once organizations start gaining visibility, is this: People like to talk about shadow IT, but there’s definitely a shadow-IoT problem, in the sense that you have devices inside of healthcare organizations that break HIPAA overtly, just from a privacy perspective. Like Amazon Echoes, right? Most hospitals say, we have a strict Amazon Echo policy.

We don’t allow these things, and yet we find them everywhere, right? And you are one Alexa query away from basically sending potentially sensitive information up to an Amazon web service that’s potentially not HIPAA-compliant or HITRUST-compliant.

So we’re starting to see organizations do asset discovery in real time, using something that’s passive on the network in order to create an asset inventory where you could say, right now: What is this device? Right? If I bring in my cell phone, I need to know, to the minute, when that thing entered my network, because it could transact some malicious activity.

And then being able to have automated controls that are identity-focused or risk-focused, in the sense of: Is this device particularly risky, because I’ve never seen it before, it’s not been updated and it’s a jailbroken iPhone, right?

Let’s not let that onto my corporate network. But we’re also seeing kind of bottom-up, security-focused segmentation.

So, you know, when an IT organization looks at segmenting their network into logical zones, they typically do it in geographic ways, in the sense of, like: Let’s do a VLAN per floor, and let’s do a subnet per business unit.

So, HR is on the third floor, and here’s the VLAN, and here’s their subnet. That’s incredibly difficult, and I’ve seen a lot of organizations fail.

I’ve also seen a lot of organizations that have succeeded, but it took years longer than they expected.

So, being able to take a security-focused, bottom-up approach, in the sense of: Let me figure out what’s on my network, right? And let’s start calculating the risk of certain devices. Does this MRI machine need to talk to the PACS server, and does the PACS server absolutely need to be able to talk with an external resource on the internet?

Being able to figure out what these devices need to connect to is really important, but it’s also very easy: A lot of these IoT devices are very, very deterministic, right? They only connect to maybe four, five or 10 machines on the network.

They don’t usually have a person behind them surfing Facebook, so they’re not random, right? And so, being able to take that and say: Hey, I’ve got an MRI machine that runs Windows XP SP0; it’s going to be here for the next seven years.

There’s no reason why Jeff in HR needs to be able to hit the IP and touch the external surface of this MRI machine, because it only needs to talk to four machines on the network. Let’s create a logical segmentation policy that makes sure that that device is useful and that its packets can flow to the right devices, but that Jeff in HR can’t inadvertently touch it once he goes and surfs a website and gets a piece of malware that tries to install ransomware on the device. So we’re starting to see kind of that bottom-up approach, but it’s 100 percent predicated on finding out what’s on your network in real time.

Because, from an IT perspective, you can’t manage what you can’t see, and from a security perspective, you can’t control and protect what you don’t know, right?

So those are the kinds of paradigms.
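[Editor’s note: Because these devices are so deterministic, the bottom-up segmentation Jeff describes can start from observed traffic. This toy sketch derives an allowlist from a baseline window of flows and flags anything outside it; device names are illustrative.]

```python
# Derive a per-device allowlist from observed (src, dst) flows.
from collections import defaultdict

observed = [                      # flows seen during a baseline window
    ("mri-01", "pacs-01"), ("mri-01", "update-srv"),
    ("mri-01", "pacs-01"), ("mri-01", "dns-01"),
]

def build_policy(flows):
    policy = defaultdict(set)
    for src, dst in flows:
        policy[src].add(dst)
    return policy

policy = build_policy(observed)

def allowed(src: str, dst: str) -> bool:
    return dst in policy.get(src, set())

assert allowed("mri-01", "pacs-01")
assert not allowed("hr-laptop", "mri-01")   # Jeff in HR gets blocked
```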

Becky Bracken:

OK, let’s just go to the Q&A. We’ve got some really good questions, and I want to make sure we have time for them.

OK, here we go, Tony. Is there a health device certification that must be met by FDA to be used in the healthcare community?

Tony Reina:

I’m sure there is; I’m sure there are actually several of them required by the FDA. I don’t know what the actual regulation would be; it would really depend on the device that you’re talking about.

For instance, we did a position paper, I think it was last year: They are starting to consider software as a medical device, which is kind of interesting; you know, an AI model that’s completely software-based, and now they’re looking at some sort of evidence-based FDA clearance on those.

And there have been, I want to say, around 75 AI algorithms that have already been FDA-cleared as software as a medical device. I won’t say it’s simple, but it’s basically looking at how well this algorithm performs on real data.

And it’s almost like you’re doing a pharmacology kind of test: prospective studies and retrospective studies. You have to say, here are the results of the study.

Once you get down into actual hardware and talking about medical devices, there are all sorts of additional regulations, even things like FCC regulations around the spectrum that you’re using, making sure that it doesn’t interfere with existing devices or pacemakers or things like that in the hospital.

Becky Bracken:

What about you, Jeff? Do you have anything to say on that, FDA regulations and devices?

Jeff Horne:

Yeah, I mean, we are seeing more and more, and I would consider them novel, things like the FDA approving AI models, right? Or very unique and complex computing platforms, which is great to see. But ultimately, healthcare is larger than the United States. And we see healthcare organizations that are multinational; they might absolutely use, from a policy perspective, that this needs to be FDA-approved, but it’s not necessarily an absolute requirement, at least that I’ve seen. I’ve seen it in the purchasing process, in terms of, like, we’ll pick this because it’s FDA-approved, versus these other things.

Tony Reina:

I will say there’s a funny one; I think it’s IDx. Like, 99.9 percent of the approved AI algorithms are FDA-cleared.

There’s one that’s actually FDA-approved to diagnose, and it’s basically for eye disease. And what’s really interesting about that one is that, in addition to all the regulatory compliance and everything, they were so confident in the algorithm that the algorithm itself has its own medical-liability insurance, just like a physician would have. I mean, that is not typical of the field, but there is one where the actual algorithm is allowed to make a formal diagnosis, and it has insurance on that.

Becky Bracken:

OK, here’s another one, Tony: “How do you train your model if you’re not using the federated data? Are you using the data at one, albeit the largest, site?”

Tony Reina:

Yeah, that’s a great one.

So, on the ones that we’re doing, we’re really doing the academic kind of work to show equivalence in the two techniques.

So, particularly on the brain-tumor one, it’s an open dataset of something like 400 MRIs that have been in this contest for the last 10 years. So all the academic researchers have been trying this for 10 years, trying to do a little bit better each time. You have access to all 484 MRIs, and so what we did is we worked with the University of Pennsylvania, which is kind of one of the curators of the site.

And they were able to say, well, they couldn’t tell us the hospitals, but they could say, here’s the original sharding: These scans went with hospital A, hospital B, hospital C. And so we were able to basically show: OK, if we trained like a federation, could we get to those same results? And so that was the initial thing. It wasn’t actually training at those hospitals; that was a dataset that was open, so we could prove the actual technology would work.

Becky Bracken:

There’s a follow up.

“Secondly, you seem to assume that all data are of the same schema at each and every federated site. That assumption may not be true, and perhaps not easy to maintain.”

Tony Reina:

Absolutely, that’s the critical piece. Google has it easier, because the users are actually doing the annotation. As you type, you’re going to say, “I went to the store to get a…” blank, and you’re filling that in. So you’re annotating the dataset itself. There’s certainly an issue with data-annotation harmonization between sites.

So this is why the federations that we’re working with are hospitals that have already worked together and already have common protocols. So this is, again, kind of coming out with this proactive type of thing. It’s not just, oh, we got some data. We’ll throw it up there.

We’ve got some data, and we’ve all agreed to the same way of organizing that data, of making sure that the annotations, the ground-truth labeling of that data, are the same.

And so that’s what you’ll see initially. There’s also a concept of vertical federated learning, which is even more interesting.

It’s much harder to do. Let’s say I’m in a financial system, where you’re looking at different data stores for the same individual.

So I’m trying to track Becky through her credit history and her bank history and all of these disparate data stores, and coming up with a single question that I want to ask about Becky, as opposed to, you know, Jeff.

So vertical kind of combines all of these stores, but again, you definitely need some sort of harmonization to be done on those annotations.

Becky Bracken:

We are at the top of the hour, but I want to get in one last question. So, everybody, just hang with me a couple more minutes, because this is an important one for Jeff. And I think it speaks to a lot of the ongoing problems, which is: How do you deal with procurement teams who are, frankly, reluctant to engage in security conversations?

The questioner worded it better than I did, but that’s the idea. Here it is: “How do security teams deal with procurement teams that don’t like to deal with security and compliance questions?”

Jeff Horne:

I mean, I would basically just involve the IT team in an overall risk discussion, right?

I think that if the procurement teams are bypassing the IT and security teams, there are always going to be some issues there.

I pick on televisions. But what’s really interesting, in terms of these IoT devices, is that there’s just no awareness that these things are computers now. You know, televisions went through this life cycle where, you know, they’ve been in boardrooms since the ’70s, right? And they graduated from CRTs to plasma, plasma to LCDs, and LCDs to, now we have, you know, organic light-emitting diodes, right?

We have these beautiful televisions, but through that evolution they gained compute capabilities. So these things are running, you know, real MIPS, ARM and, in some cases, x86 processors, in order to not only provide the high refresh rate and some dithering on the screen to make things pop.

But also, you know, applications: Netflix, Google Plus, Hulu, you name it.

So, you know, just picking on that one example, it’s easy to point to the, “Oh my God, why is my coffeemaker connected to Wi-Fi?” But truly, these devices, you know, coffee makers, MRI machines, these ventilators, right, this is people wanting to take something that’s been historically offline and put it online for patient care.

So they’re not having to spend 10 minutes scrubbing in and scrubbing out just to, in fact, reach the patient. Those things all have inherent risk, right?

As an attacker, you only need an IP address, that’s it, right? And it could be internal, it could be external.

The moment that these things get external is the moment that you can have somebody anywhere in the world attack it; you’re not dealing with somebody walking through the door. So, at the end of the day, I think it’s about being able to have those risk discussions in a calm way, being able to explain it.

And then bring the procurement people to the table, in the sense of, like: Hey, we’re concerned about this, predicated on the fact that a lot of these devices that weren’t computers yesterday are now full computers. That needs IT security oversight. But it needs it in a logical, non-prescriptive way, in the sense that security is not going to be a blocker for patient healthcare.

Right. You know, security could probably push back and say, hey, use TV Y versus TV X. But when it comes down to cost analysis, business analysis, this MRI machine versus this MRI machine is, you know, significantly discounted.

There is a way that you can essentially accept risk but mitigate it appropriately. And so I think that last step is the one that’s absolutely needed: that the procurement discussion says, we can absolutely do this, but it has inherent risks that we’re going to need to mitigate.

Becky Bracken:

Smart.

OK, well, I want to thank you both so much for your time and your insights today.

Their emails are here, both Jeff’s and Tony’s, if you have any specific questions to follow up with them. Or if you have any feedback, comments or questions for me, please reach out.

And I want to let you know that this webinar is going to be delivered to your inbox. Please come check out Threatpost’s daily news coverage. We also have a new eBook coming out focused on this very topic, cybersecurity and healthcare, so please be on the lookout for that. Thank you all again for joining us, and thank you to Tony and Jeff.
