The Misaligned Incentives for Cloud Security

Russia’s Sunburst cyberespionage campaign, discovered late last year, impacted more than 100 large companies and US federal agencies, including the Treasury, Energy, Justice, and Homeland Security departments. A crucial part of the Russians’ success was their ability to move through these organizations by compromising cloud and local network identity systems to then access cloud accounts and pilfer emails and files.

Hackers said by the US government to have been working for the Kremlin targeted a widely used Microsoft cloud service that synchronizes user identities. The hackers stole security certificates to create their own identities, which allowed them to bypass safeguards such as multifactor authentication and gain access to Office 365 accounts, impacting thousands of users at the affected companies and government agencies.

It wasn’t the first time cloud services were the focus of a cyberattack, and it certainly won’t be the last. Cloud weaknesses were also critical in a 2019 breach at Capital One. There, an Amazon Web Services cloud vulnerability, compounded by Capital One’s own struggle to properly configure a complex cloud service, led to the disclosure of tens of millions of customer records, including credit card applications, Social Security numbers, and bank account information.

This trend of attacks on cloud services by criminals, hackers, and nation states is growing as cloud computing takes over worldwide as the default model for information technologies. Leaked data is bad enough, but disruption to the cloud, even an outage at a single provider, could quickly cost the global economy billions of dollars a day.

Cloud computing is an important source of risk both because it has quickly supplanted traditional IT and because it concentrates ownership of design choices at a very small number of companies. First, cloud is increasingly the default mode of computing for organizations, meaning ever more users and critical data from national intelligence and defense agencies ride on these technologies. Second, cloud computing services, especially those supplied by the world’s four largest providers—Amazon, Microsoft, Alibaba, and Google—concentrate key security and technology design choices inside a small number of organizations. The consequences of bad decisions or poorly made trade-offs can quickly scale to hundreds of millions of users.

The cloud is everywhere. Some cloud companies provide software as a service, support your Netflix habit, or carry your Slack chats. Others provide computing infrastructure like business databases and storage space. The largest cloud companies provide both.

The cloud can be deployed in several different ways, each of which shifts the balance of responsibility for the security of this technology. But the cloud provider plays an important role in every case. Choices the provider makes in how these technologies are designed, built, and deployed influence the user’s security—yet the user has very little influence over them. Thus, if Google or Amazon has a vulnerability in their servers—which you are unlikely to know about and have no control over—you suffer the consequences.

The problem is one of economics. On the surface, it might seem that competition between cloud companies gives them an incentive to invest in their users’ security. But several market failures get in the way of that ideal. First, security is largely an externality for these cloud companies, because the losses due to data breaches are largely borne by their users. As long as a cloud provider isn’t losing customers by the droves—which generally doesn’t happen after a security incident—it is incentivized to underinvest in security. Additionally, data shows that investors don’t punish the cloud service companies either: Stock price dips after a public security breach are both small and temporary.

Second, public information about cloud security generally doesn’t share the design trade-offs involved in building these cloud services or provide much transparency about the resulting risks. While cloud companies have to publicly disclose copious amounts of security design and operational information, it can be impossible for consumers to understand which threats the cloud services are taking into account, and how. This lack of understanding makes it hard to assess a cloud service’s overall security. As a result, customers and users aren’t able to differentiate between secure and insecure services, so they don’t base their buying and use decisions on it.

Third, cybersecurity is complex—and even more complex when the cloud is involved. For a customer like a company or government agency, the security dependencies of various cloud and on-premises network systems and services can be subtle and hard to map out. This means that users can’t adequately assess the security of cloud services or how they will interact with their own networks. This is a classic “lemons market” in economics, and the result is that cloud providers provide variable levels of security, as documented by Dan Geer, the chief information security officer for In-Q-Tel, and Wade Baker, a professor at Virginia Tech’s College of Business, when they looked at the prevalence of severe security findings at the top 10 largest cloud providers. Yet most consumers are none the wiser.

The result is a market failure where cloud service providers don’t compete to provide the best security for their customers and users at the lowest cost. Instead, cloud companies take the chance that they won’t get hacked, and past experience tells them they can weather the storm if they do. This kind of decision-making and priority-setting takes place at the executive level, of course, and doesn’t reflect the dedication and technical skill of product engineers and security specialists. The effect of this underinvestment is pernicious, however, by piling on risk that’s largely hidden from users. Widespread adoption of cloud computing carries that risk to an organization’s network, to its customers and users, and, in turn, to the wider internet.

This aggregation of cybersecurity risk creates a national security challenge. Policymakers can help address the challenge by setting clear expectations for the security of cloud services—and for making decisions and design trade-offs about that security transparent. The Biden administration, including newly nominated National Cyber Director Chris Inglis, should lead an interagency effort to work with cloud providers to review their threat models and evaluate the security architecture of their various offerings. This effort to require greater transparency from cloud providers and exert more scrutiny of their security engineering efforts should be accompanied by a push to modernize cybersecurity regulations for the cloud era.

The Federal Risk and Authorization Management Program (FedRAMP), which is the principal US government program for assessing the risk of cloud services and authorizing them for use by government agencies, would be a prime vehicle for these efforts. A recent executive order outlines several steps to make FedRAMP faster and more responsive. But the program is still focused largely on the security of individual services rather than the cloud vendors’ deeper architectural choices and threat models. Congressional action should reinforce and extend the executive order by adding new obligations for vendors to provide transparency about design trade-offs, threat models, and resulting risks. These changes could help transform FedRAMP into a more effective tool of security governance even as it becomes faster and more efficient.

Cloud providers have become important national infrastructure. Not since the heights of the mainframe era between the 1960s and early 1980s has the world witnessed computing systems of such complexity used by so many but designed and created by so few. The security of this infrastructure demands greater transparency and public accountability—if only to match the consequences of its failure.

This essay was written with Trey Herr, and previously appeared in Foreign Policy.

Posted on May 28, 2021 at 6:20 AM • 28 Comments

Comments

Petre Peter May 28, 2021 6:46 AM

If the government protects them, then the government will want to create legislation on certification, training etc.

tim May 28, 2021 8:37 AM

I have a lot of problems with this article. We need to call out the difference between IaaS providers (e.g. AWS, Azure, etc) and SaaS providers (e.g. basically everyone else – Slack, ServiceNow, etc). AWS and Azure have great incentives to invest in security for their respective platforms, and they do. And breaches that took place on AWS have generally been misconfigurations of a service*, not a vulnerability.

On the other hand – SaaS providers provide little or no information on information security practices. We have found providers that do not log security events, hard code credentials in code, etc. We had one provider give us code that gave us admin on their git repos. These are the real problem.

We must also call out that the vast majority of companies’ overall risk reduces when they move to a cloud provider because – let’s face it – most organizations can’t run a toaster, let alone a complex data center environment.

  • Case in point: Capital One’s multiple failures are directly attributed to the misconfiguration of basic services. It wasn’t “complex”. In essence they failed Security Governance 101. It wasn’t an AWS issue.

Clive Robinson May 28, 2021 8:42 AM

@ Bruce, ALL,

Cloud computing is an important source of risk both because it has quickly supplanted traditional IT and because it concentrates ownership of design choices at a very small number of companies.

Really, it should have the word “avoidable” in front of “risk”.

The simple fact is that in all too many places these days, the view of many about the Cloud in its many aberrant forms is changing from “avoidable” to “required”.

Whilst this might have some truth for more esoteric uses, as we know many of those uses involve the collection and processing of data that involves PII or is in some other way toxic.

Further analysis shows that these uses are at best marginal and in reality based on a faux market.

Such faux markets are often “bubble markets” in that they inflate without substance and either deflate as quickly or just burst. The result is that what was marginal at best becomes a significant cost.

What we know of the Cloud suggests that when something is at best marginal and becomes a cost, it gets dumped quickly, and any data becomes effectively orphaned, thus no longer under responsible control.

Such loss of control means one of three things,

1, Divested as scrap.
2, Divested as “fire sale”.
3, Divested by receivers.

With no legal control over the method of divestment or what future acquirers may do with it.

Thus the toxic data ceases to be constrained and just gets thrown into the general environment to rise up and critically affect those whose PII etc. is in that data.

There needs to be very strong legislation to stop this happening, not just because bubbles burst, but also “faux bankruptcy” is a very effective way of removing contractual obligations, and even regulations and legislative constraint.

Keith Douglas May 28, 2021 9:10 AM

There’s also the fact that cloud providers can have a different scope of security expectations than your traditional security groups. For example, Microsoft does not consider auditing and monitoring to be directly in the security space; yet NIST and (in our case) the Canadian counterparts put this sort of thing directly in the control catalogues, etc. This is especially true in the software development training; I find this fascinating in light of “devsecops” and agile ideas.

Another factor can be labelled “bandwagoning” – groups can adopt cloud technologies without (for example) the necessary prerequisite software development maturity because it solves other problems in such maturity (like how to provision virtual machines). This is partially effective and initial successes will happen – but without sustainable, self-applicable practices, one hits walls. These are all made worse by artificial deadlines and other “bureaucratic” self-inflicted wounds.

Yet another is that “because everything is code” barriers to entry for software development are low; this is good and bad. I’m all for “software development for the people” but some of these tools (Power BI, for example) do not play nice this way, and it encourages the (dangerous) attitude that software development = programming, when of course that’s a part-to-whole relation at best, not an identity.

Me May 28, 2021 9:35 AM

You’d think that a security blog would have some sort of protection from comments that escape their comment box and block the actual thread. Alas, this isn’t the world we live in, but it did illustrate the point at least somewhat.

Winter May 28, 2021 10:12 AM

It seems it is not difficult to filter out Zalgo text using regex. However, it would require messing with WordPress. That might be more of a problem.

ht tps://stackoverflow.com/questions/32020120/remove-special-characters-that-mess-with-formating

ht tps://stackoverflow.com/questions/32921751/how-to-prevent-zalgo-text-using-php

ht tps://stackoverflow.com/questions/11978912/how-to-protect-against-diacritics-such-as-zalgo-text

(URLs fractioned for your protection)
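As Winter’s links suggest, the usual approach is to cap runs of Unicode combining marks. Here is a minimal Python sketch of that idea – an assumption of one workable approach, not whatever this blog or WordPress actually uses, and the character ranges are not exhaustive:

```python
# A minimal sketch of one way to defang Zalgo text (assumed approach, not the blog's filter).
# Zalgo text works by stacking Unicode combining marks, so capping long runs of
# combining characters neutralizes it while leaving ordinary accented text alone.
import re

# Runs of 3+ combining marks from common combining blocks (not an exhaustive list).
COMBINING_RUN = re.compile(
    r'[\u0300-\u036F\u0483-\u0489\u1AB0-\u1AFF\u1DC0-\u1DFF\u20D0-\u20FF\uFE20-\uFE2F]{3,}'
)

def defang_zalgo(text: str, max_marks: int = 2) -> str:
    # Keep at most `max_marks` combining characters per run.
    return COMBINING_RUN.sub(lambda m: m.group(0)[:max_marks], text)

print(defang_zalgo("e" + "\u0301" * 12))  # a pile of 12 stacked accents is cut down to 2
```

The same idea could be applied server-side before a comment is stored, which is where the “messing with WordPress” part comes in.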

echo May 28, 2021 11:21 AM

I’m really fed up with seeing the same nonsense happen again and again for all the same reasons. As usual it’s pretty much known knowns all the way. The US is a big presence in the global market and the hands off attitude to “freedom of speech” and business regulation allows for innovation but also comes with a heavy price in terms of misinformation and dodgy products. Taking a large axe to the transatlantic pipe would solve a lot of problems.

I think there’s a case of too many people trying to be clever and empire building. Who needs the headache?

name.withheld.for.obvious.reasons May 28, 2021 12:08 PM

As mentioned earlier by @ Keith Douglas, there is a continuous stream of problem and solution mis-identification. When systemic design and development issues create a constant vector to unacceptable risk, maybe the model is broken. Much ink continues to be spilled on issues surrounding technological development, some of it here. Where the highway that we occupy continues to have a probability of failure at a nearly constant 1, the ability to address and solve these issues sits at the inverse, a probability of nearly zero.

With the lack of attention to the basics, there is no hope of closing the ever-widening gap between the need to provide integrity and the motivation and actions born of greed. Until there is serious introspection by the players in these markets, we will be left to look on and say WTF.

NoExternalBackups May 28, 2021 1:31 PM

@Clive,

“What we know of the Cloud suggests that when something is at best marginal and becomes a cost, it gets dumped quickly, and any data becomes effectively orphaned, thus no longer under responsible control.”

This is why I stopped backing up my personal files to the cloud, even though I encrypted the folder before the backup software supposedly encrypted before transfer and then kept the data encrypted at rest on their computers.

I realized at some point that no matter how much I trusted the company, the assets of that company might be transferred to some other company that was going to do with my data whatever they wanted.
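For readers wondering what encrypting locally before the backup tool runs can look like, here is a minimal, hypothetical sketch using the Python cryptography library. The file paths and the key handling are invented for illustration and are deliberately simplistic:

```python
# Hypothetical sketch: encrypt files locally before any backup tool sees them,
# so the cloud provider (or whoever later acquires its assets) only ever holds ciphertext.
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

def load_or_create_key(key_path: Path) -> bytes:
    # In real use, keep this key offline (printed or on removable media),
    # never alongside the backups themselves.
    if key_path.exists():
        return key_path.read_bytes()
    key = Fernet.generate_key()
    key_path.write_bytes(key)
    return key

def encrypt_tree(src: Path, dst: Path, key: bytes) -> None:
    f = Fernet(key)
    for path in src.rglob("*"):
        if path.is_file():
            out = dst / path.relative_to(src)
            out.parent.mkdir(parents=True, exist_ok=True)
            out.write_bytes(f.encrypt(path.read_bytes()))

if __name__ == "__main__":
    key = load_or_create_key(Path("backup.key"))
    encrypt_tree(Path("Documents"), Path("Documents.encrypted"), key)
    # Point the cloud backup client at Documents.encrypted, not Documents.
```

The point of the design is exactly the one made above: whatever the provider does with its copy, it never holds the plaintext or the key.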

Peter May 28, 2021 4:42 PM

In Australia cyber-security (for government entities) is managed by the Australian Signals Directorate (ASD) who publish a set of standards. They also provide an independent certification and training for security experts.
Any cloud provider who supplies services to the Aus Gov (including both federal and state) must be certified to the appropriate level. Many non-gov clients pay a lot of attention to these certifications, meaning that there is a significant incentive for providers to comply.

Also, data sovereignty is a significant point and it is mandated that all secure information must be stored on machines physically located in Australia. Some exceptions apply as long as the security controls are sufficient, but storing anything in the US is right out due to the known issues – especially intercept by US intelligence agencies.

Charlie Zaloom May 28, 2021 7:33 PM

Good article. Some notes on cloud adoption:

— The largest customers are still streaming media. FinSvcs is still skeptical. Most banks and oil companies are listed as ‘customers’, but most are not running backend core txp business in the cloud. It’s limited to customer-facing systems. They are getting closer, though.
— Moving to the cloud is not a technical decision. IT is not driving adoption, CFO are. Cloud svcs move long-term capital expenditure to operating expenses, written off quarterly. (CFOs hate capX, particularly from non-revenue generating centers.)
— There isn’t enough security talent available to run enterprise systems correctly today. (I don’t know how our lights are still on.)
— Many companies do not understand or handle the demarcation of liability/responsibility between themselves and cloud providers well at all. Their contract defines each party’s responsibilities, so it needs to be deeply considered by management and reflected in operations. Law looms larger in IT now.
— Small/medium companies (SME) don’t have the corp management, will, or talent to operate safely. They will likely operate better with industry cloud svcs.

But, you still have the aggregated risk. Do you trust AWS and the others to handle it?

My tech buddies (SABI/RTB types) say that container security on the cloud has gotten “pretty good”. It would be very good to have a TTP certify that.

AWS has quietly recognized that customers can hurt themselves misconfiguring their infrastructure and that is bad for business. They have been developing “pretty good” AI (e.g. zelkova, tiros) for customer tools to scan IAM policies and network services, etc. and they do seem pretty determined about it.

Which is all good stuff, but it still takes a genius to put a secure enterprise infrastructure together, coordinating and maintaining all the policies.
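Zelkova and Tiros reason about policies and network reachability formally; the toy sketch below is nothing of the sort, just an illustration of the kind of obvious IAM misconfiguration such scanners are meant to catch. The policy document and the checks are invented for the example:

```python
# Toy IAM policy check: flag Allow statements with wildcard actions or resources.
# A crude illustration of what provider-side policy scanners look for, not a
# reimplementation of any AWS tool; the example policy is made up.
import json

def risky_statements(policy: dict) -> list[str]:
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(f"wildcard action in {stmt.get('Sid', '<no Sid>')}")
        if "*" in resources:
            findings.append(f"wildcard resource in {stmt.get('Sid', '<no Sid>')}")
    return findings

example_policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "TooBroad", "Effect": "Allow", "Action": "s3:*", "Resource": "*"}
  ]
}
""")

print(risky_statements(example_policy))
# ['wildcard action in TooBroad', 'wildcard resource in TooBroad']
```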

Clive Robinson May 28, 2021 9:41 PM

@ Charlie Zaloom,

All good points, and more or less what you would expect with the way certain things are structured (hence your CFO points).

One thing though,

Which is all good stuff, but it still takes a genius to put a secure enterprise infrastructure together, coordinating and maintaining all the policies.

I’m not sure “genius” is the right word or term.

Yes, to most, what such people do makes them look like the “Techno-mages”[1] of SciFi/fantasy. But the reality is a bit different: they are more “Techno-Renaissance-Men”[2], bringing knowledge from many disparate domains to bear on the problem domain.

In my own work I find that what I consider common sense based on my past experience appears almost as magic, or even mystic as though I can predict the future, to those who do not have my background, experience and acquired “breadth” of knowledge, or don’t have the time to think beyond some artificial deadline.

The humdrum reality is I’m “eternally curious” and I like, amongst other things, “industrial archaeology” and “industrial history”. To understand them you in effect need to “get inside the lives” of those who were their figures of note and also the “working stiffs” who made their achievements possible, and that way see what the drivers and motivations were that predicated their actions, both successful and not (studying disasters is a very worthwhile activity). You then discover that, though the technology has moved on in many many ways, the drivers and motivations really do not change much. In effect, when you strip off the technology you are left with “the human condition”, and that only changes very slowly. The Conquistadors of half a millennium ago can still be seen in the modern follow-on of the “corporate raider”. The drivers, motivators, methods, and outcomes have all been seen over and over and have barely changed.

Thus perhaps the two biggest failings of ICT are,

1, We do not learn from our history.
2, We have not developed objective measurands.

Without either we are progressing without purpose, rigour or efficiency. Thus we are mired down in a mess of our own creation. Anyone who can show purpose or foresight is going to appear to be “the one-eyed man in the kingdom of the blind”, even though they feel blighted when they cast their gaze on the world of those with two eyes.

In short, what you observe is the same from whichever direction you look; what makes the difference in whether you perceive an actor as genius or dunce depends on your own experience and acquired knowledge, and what you see as common sense actions and probable outcomes.

As someone once observed,

“The true acts of genius are those that are new and original, but with even close hindsight appear absolutely obvious”.

But as Richard Feynman observed, echoing the old Russian observation that the most frightening phrase in the Russian language is “That’s odd”:

“The sound of scientific discovery is not Eureka, but that’s odd.”

The Chinese, like the Russians, tend towards a fatalistic outlook in sayings, hence the curse of,

“May you live in interesting times.”

To me at least, all times are interesting.

[1] The idea of Techno-mages arises from Arthur C. Clarke’s observation that,

“Any sufficiently advanced technology is indistinguishable from magic.”

You then try and imagine what that would be like if a sufficiently advanced group turned up on your doorstep. That is, they do not have to be geniuses, just more advanced in knowledge and its practical application. In some ways we do not have to imagine, but just look at history. The expansion out of Europe into the rest of the world, based almost entirely on “greed”, shows the bad side. Not just to the cultures that were invaded, but later the effect that “easy riches” had. For instance Spain went from being a major power to dilettantism, effectively ceased further forward progress, and in essence collapsed into first mysticism then fascism and civil conflict, which it still suffers from today (i.e. Catalan issues). It’s why, although Arthur C. Clarke did not say it, most authors see Techno-mages as having a strong “humanism” element to their make-up and in effect see them as the logical successors to Renaissance Man as typified by Leon Battista Alberti and Leonardo da Vinci, and later others such as Sir Christopher Wren.

[2] https://www.britannica.com/topic/Renaissance-man

ResearcherZero May 28, 2021 9:59 PM

Or how about judges, lawyers and prosecutors? Detectives, police officers? It’s all available. Everything they have gotten up to, and some of it could be considered quite disturbing. But then again, in relation to the function of law itself and the courts, you could say it’s hardly disturbing at all, unless of course you are the unaware and generally naive public, in which case you may be very, very shocked (if you can still be shocked that is).

Another problem for the youth to sort out, as it’s not like the federal police are doing anything about it?

ResearcherZero May 28, 2021 10:25 PM

But really, what is the worst that could happen?

the attack was still in progress and that the hackers were continuing to send spearphishing emails, with increasing speed and scope…
hxxps://www.nytimes.com/2021/05/28/us/politics/russia-hack-usaid.html

The email was implanted with code that would give the hackers unlimited access to the computer systems of the recipients, from ‘stealing data to infecting other computers on a network,
hxxps://blogs.microsoft.com/on-the-issues/2021/05/27/nobelium-cyberattack-nativezone-solarwinds/

A sophisticated cyber threat actor leveraged a compromised end-user account from Constant Contact—a legitimate email marketing software company—to spoof a U.S. government organization and distribute links to malicious URLs.
hxxps://us-cert.cisa.gov/ncas/current-activity/2021/05/28/joint-cisa-fbi-cybersecurity-advisory-sophisticated-spearphishing

The benefit with the Australian system is that you don’t know it’s happening and you find out years later from somebody else. It’s simple to deal with: you just don’t say much at all to the public and they are generally none the wiser (not that they would understand all that techno babble anyway). Of course that’s if anyone detects it in the first place.

Winter May 29, 2021 4:47 AM

Back to the original problem: Cloud security.

IMHO, any clients moving to on-premises servers will become more vulnerable to security breaches and outages. There is no way that any SMB or moderately big company can attract the required sysadmins and security personnel.

The whole move towards cloud was caused by companies being unable to even apply the required updates to the software they needed.

Freezing_in_Brazil May 29, 2021 6:41 AM

@ Clive, ALL

Re Risk

This is an interesting point. I notice in my daily business that people have a distorted view of what risk represents. For the overwhelming majority, risk is some likelihood that something bad will happen to your digital assets. Sometimes it is difficult to explain that risk is the probability that something bad happens, times its impact (R = P × I). This means that risk is a magnitude which has only two values: acceptable or not acceptable.
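A quick numerical sketch of the R = P × I idea, with entirely made-up figures and an assumed risk-appetite threshold, just to show how the magnitude gets compared against “acceptable”:

```python
# Toy illustration of R = P x I with invented numbers (not data from this thread).
# Annualized risk = probability of the event in a year x expected impact in dollars.
scenarios = {
    "cloud account takeover":    (0.05, 2_000_000),  # 5% chance/year, $2M impact (assumed)
    "misconfigured bucket leak": (0.10, 500_000),    # assumptions for illustration only
}

ACCEPTABLE = 100_000  # example risk-appetite threshold in dollars/year (assumed)

for name, (p, impact) in scenarios.items():
    risk = p * impact
    verdict = "acceptable" if risk <= ACCEPTABLE else "NOT acceptable"
    print(f"{name}: R = {p} x {impact:,} = {risk:,.0f}/yr -> {verdict}")
```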

For me, the risk of the cloud is always unacceptable

Ignorant US redneck May 29, 2021 11:30 AM

Call me a luddite.

The Cloud is, by definition, distributed data storage and processing. Which causes me to ask several questions:
1. Where is my data stored?
2. Who, besides me, is allowed access?
3. Where are the CVs and security checks of the sysadmin on shift?

So, the CFO controls the purse strings and stamps a big NO on in-house purchased IT equipment.

So, there aren’t enough qualified sysadmin types available, regardless of salaries offered.

So, the big cloud providers off-load the risk and penalties onto the customer.

Then, failure is not just an option. It is guaranteed.

Now, you can throw your hands up and wait for the inevitable doom (assuming you can’t offload it to your customers) or you can regress to 12-column ledgers and 10-key calculators with data driven at the speed of a dead snail.

Hobson’s choice or dilemma. You decide.

Clive Robinson May 29, 2021 12:27 PM

@ Ignorant US redneck,

Then, failure is not just an option. It is guaranteed.

Correct.

Now roll your arguments backwards and what do you find?

A small handful of Silicon Valley software houses pushing out really shoddy software back in the last century.

What have they done to “stop the shoddy”? Well, they would say a lot, but the reality is they have done way way less on sorting out the shoddy, and boatloads more on bloat, marketing features, and unwarranted complexity.

Also they absolutely rely on the Internet to ship out “patches” like machine gun bullets to mask “the shoddy”, not fix it.

That’s why you cannot get SysAdmins up to the job: each one trying to learn sufficient to keep things safe is effectively outnumbered by “makers of shoddy” not ten to one, not one hundred to one, but thousands to one.

Look at it this way: every single developer in those Silicon Valley companies is “making shoddy” under “marketing instructions via management”. The average company will have more than ten applications from these “merchants of crud” that have legacy code that is at least thirty years old.

As they say, “the math ain’t difficult”: no matter which way you count it, an individual SysAdmin will never ever learn sufficient to mitigate “the shoddy”.

Thus the start of your argument originated with people in their late fifties or older, and the technical debt has just built up, and never gets serviced, except when it is too embarrassing to ignore.

There are laws about merchantability and fitness for purpose even in the US, but if you try getting them applied you will meet resistance like you have never felt before. Right from the get-go these Silicon Valley companies claimed they were not “selling goods” but “leasing services” so they could argue those laws did not apply, and that they had no liability for faults etc.

Look at what Bill Gates’ family did for a living,

https://heavy.com/news/2019/09/bill-gates-parents-mary-maxwell-sr/

And how Microsoft got going…

In essence, much of what ails the software industry can be traced back to those days, and what they were allowed to get away with…

Ignorant US redneck May 29, 2021 12:43 PM

No arguments with your history, Clive. Failure was assured even if none were aware of it.

Hobson’s Choice = You can have the mushrooms behind the outhouse or none at all.

Dilemma = You can have the mushrooms on the left of the outhouse or the ones on the right of the outhouse.

Clive Robinson May 29, 2021 5:49 PM

@ Freezing_in_Brazil,

First off, how are you doing?

Sometimes it is difficult to explain that risk is the probability that something bad happens, times its impact (R = P × I).

I have a hard enough time explaining to people what “probability” actually means in reality…

The usual one is the likelihood of your house catching fire and how statistically you can set insurance rates etc. You find few people ever get to accept it, especially politicians. Most reason that they are careful whilst others have accidents that happen… so they should not have to pay for other people. Politicians actually argue that cutting first responders will actually make us safer because people will take more responsibility…

Yup, they have cognitive biases or agendas, and shifting their opinions is often impossible because of them.

Mind you, try explaining “random” in a useful way; that really is pushing a rock uphill with your hands tied behind your back…

Winter May 30, 2021 10:38 AM

@Clive
“Politicians actually argue that cutting first responders will actually make us safer because people will take more responsibility…”

I assume these are the same people who argue that reducing taxes increases tax revenues? And the same people who make sure the first responders at their house are first class?

But I have no bone of contention with politicians who tell voters what they want to hear. I despise the people who pretend to believe this and vote for them, because those people are very good at calculating that these budget cuts will fall on people much poorer than they are.

Winter May 30, 2021 10:42 AM

@Clive
“Mind you, try explaining “random” in a useful way; that really is pushing a rock uphill with your hands tied behind your back…”

My best explanation is betting money on the outcome. Random is when you cannot beat a monkey throwing darts.
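A small simulation along the lines of that betting test; the strategies and numbers are arbitrary, the point is only that against a truly random sequence none of them beats the monkey:

```python
# Toy illustration of the betting test for randomness: against a fair coin,
# no betting strategy does better in the long run than guessing at random.
import random

FLIPS = 100_000
coin = [random.randrange(2) for _ in range(FLIPS)]

strategies = {
    "always bet heads":    lambda history: 1,
    "repeat last outcome": lambda history: history[-1] if history else 0,
    "monkey (random)":     lambda history: random.randrange(2),
}

for name, bet in strategies.items():
    history, wins = [], 0
    for outcome in coin:
        wins += (bet(history) == outcome)
        history.append(outcome)
    print(f"{name:20s} win rate: {wins / FLIPS:.3f}")  # all hover around 0.500
```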

Freezing_in_Brazil May 30, 2021 9:39 PM

@ Clive, All

I’m very fine, my friend. Thanks for the thoughts and your comment.

I hope you are well too as you recover. The second jab is there waiting for you.

My best regards. I bid everyone peace.

Cigaes May 31, 2021 5:18 AM

I think there is a fourth side to the misaligned incentives: cloud providers do not make it easy to move our data to a competitor. Most, in fact, make it harder on purpose, under the guise of a “better user experience”. This is the reason so few people move their data after large breaches.

Eva May 31, 2021 7:50 AM

The first step to minimizing risks in the cloud is to identify the key security threats in a timely manner. At the RSA conference in March this year, the CSA (Cloud Security Alliance) presented a list of 12 cloud security threats facing organizations.
I recommend looking for it on the Internet and familiarizing yourself with it in detail.

Water Industry Person May 31, 2021 5:11 PM

@ Freezing_in_Brazil, Clive, All

Sometimes it is difficult to explain that risk is the probability that something bad happens, times its impact (R = P × I). This means that risk is a magnitude which has only two values: acceptable or not acceptable.

I have a hard enough time explaining to people what “probability” actually means in reality…

My experience is also that probability and risk are poorly understood, and when understood, poorly estimated.

In a couple of my roles, my teams used Failure Mode & Effect Analysis (FMEA) pretty much as received from our internal consultants to calculate a Risk Priority Number (RPN) that goes further than R = P × I. It took into account likelihood of occurrence (probability) and severity (impact), but also detectability (along with the ability to prevent or mitigate).
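For readers unfamiliar with RPN, here is a toy version of the arithmetic, with 1–10 scales and scores invented purely for illustration (not from any real FMEA):

```python
# Toy RPN calculation: occurrence x severity x detectability, each scored 1-10
# (10 = worst). The failure modes and scores below are invented to show the arithmetic.
failure_modes = {
    #                                  (occurrence, severity, detectability)
    "backup job silently fails":       (4, 8, 9),
    "pump seal leak":                  (6, 5, 3),
    "operator enters wrong setpoint":  (7, 6, 5),
}

for name, (occ, sev, det) in failure_modes.items():
    rpn = occ * sev * det
    print(f"{name:32s} RPN = {occ} x {sev} x {det} = {rpn}")

# Higher RPN gets attention first; a mitigation that improves detectability
# (lowers the third score) reduces RPN even when probability and impact stay fixed.
```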

That’s a less pure approach, but the idea (I think) was that it guided the process of working through minimizing your risks – do you accept a risk with X probability and Y impact if you can implement some prevention or mitigation (ie. some bad thing can get started but you can stop it before it becomes a problem), or do you find a way to eliminate it entirely? Iterating through these factors with finer and more focused mitigations could sometimes reduce the risk to an acceptable level, but on the other hand, you could end up going very deep into rabbit holes, so to prevent that someone always had to referee these discussions.

In FMEA analyses, we made our best guesses at all the types of risks we could think of, and our best estimates of the risk levels and mitigations for individual risk factors.

Our FMEA analyses probably didn’t go far enough. I can think of two weaknesses, and there are probably others I never thought of and others here probably could:

  1. Our analyses covered “known knowns” and “known unknowns”, but didn’t even try to take a swing at how to react when “unknown unknowns” happen.
  2. We never looked at detectability or severity of interactions or co-occurrences among two or more risk items – that is, cases where two or more risk items occur in parallel or in sequence that individually may be minor but together magnify the impact manifold. As for co-occurrences, I’ve seen, a couple of times, two simultaneous failures in industrial equipment with symptoms either the same or so similar they seem identical at first. Solving one and finding that “the symptom” (actually two co-occurring identical or similar symptoms) is still there can be maddening. If you have the temperament, mindset and knowledge to keep going and track the other cause down, you resolve the problem eventually. Most people don’t, and then try to get them to understand the probability of such co-occurrences when it’s difficult to get them to understand the basic concept of probability! (A toy calculation follows after this list.)
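A toy calculation of that co-occurrence point, with invented failure rates and an independence assumption:

```python
# Toy numbers for the co-occurrence point above (rates are invented, faults assumed independent).
# Two faults that each look tolerable on their own can still co-occur often enough
# to matter over a fleet of machines and years of operation.
p_a = 0.02   # chance fault A appears in a given week (assumed)
p_b = 0.01   # chance fault B appears in the same week (assumed)
weeks = 52
machines = 200

p_both_one_week = p_a * p_b                       # 0.0002, assuming independence
p_never_in_year = (1 - p_both_one_week) ** weeks  # per machine
p_at_least_once_fleet = 1 - p_never_in_year ** machines

print(f"P(both faults in one week, one machine)   = {p_both_one_week:.4f}")
print(f"P(co-occurrence within a year, whole fleet) = {p_at_least_once_fleet:.2f}")  # ~0.88
```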

Another thing FMEAs don’t address is the unwillingness of some participants or management to acknowledge and deal with risks, and to prepare mitigations, when they don’t want to hear about them because of too high an impact to schedule and/or budget…

My experience is in industrial equipment, but the principles and lessons apply just about everywhere.

Freezing_in_Brazil June 1, 2021 9:31 AM

@ Water Industry Person

Great post, thank you. RPN is amazing.

Allow me to link an overview for the casual reader:

hxxps://www.sciencedirect.com/topics/engineering/risk-priority-number

Regards

Obi June 2, 2021 11:39 AM

the result is that cloud providers provide variable levels of security, as documented by Dan Geer, the chief information security officer for In-Q-Tel, and Wade Baker, a professor at Virginia Tech’s College of Business, when they looked at the prevalence of severe security findings at the top 10 largest cloud providers.

Hold up! What they did was:
– Looked up a list of IP ranges of public cloud providers
– Ran a scanner against them
– Created a graph

This is utterly disingenuous. Because some customer ran a WordPress with vulns, that means the cloud provider is at fault?

The application stack is clearly part of the customer’s responsibility. This is like claiming “AVIS is bad because this rental car cut me off in traffic, almost causing an accident”.

How many vulnerabilities did they find in the cloud’s services? API endpoints? Gateway code? LB TLS stack? Anything? Can we have a link to the CVE?
