The Cost of Cybercrime

Really interesting paper calculating the worldwide cost of cybercrime:

Abstract: In 2012 we presented the first systematic study of the costs of cybercrime. In this paper, we report what has changed in the seven years since. The period has seen major platform evolution, with the mobile phone replacing the PC and laptop as the consumer terminal of choice, with Android replacing Windows, and with many services moving to the cloud. The use of social networks has become extremely widespread. The executive summary is that about half of all property crime, by volume and by value, is now online. We hypothesised in 2012 that this might be so; it is now established by multiple victimisation studies. Many cybercrime patterns appear to be fairly stable, but there are some interesting changes. Payment fraud, for example, has more than doubled in value but has fallen slightly as a proportion of payment value; the payment system has simply become bigger, and slightly more efficient. Several new cybercrimes are significant enough to mention, including business email compromise and crimes involving cryptocurrencies. The move to the cloud means that system misconfiguration may now be responsible for as many breaches as phishing. Some companies have suffered large losses as a side-effect of denial-of-service worms released by state actors, such as NotPetya; we have to take a view on whether they count as cybercrime. The infrastructure supporting cybercrime, such as botnets, continues to evolve, and specific crimes such as premium-rate phone scams have evolved some interesting variants. The overall picture is the same as in 2012: traditional offences that are now technically ‘computer crimes’ such as tax and welfare fraud cost the typical citizen in the low hundreds of Euros/dollars a year; payment frauds and similar offences, where the modus operandi has been completely changed by computers, cost in the tens; while the new computer crimes cost in the tens of cents. Defending against the platforms used to support the latter two types of crime cost citizens in the tens of dollars. Our conclusions remain broadly the same as in 2012: it would be economically rational to spend less in anticipation of cybercrime (on antivirus, firewalls, etc.) and more on response. We are particularly bad at prosecuting criminals who operate infrastructure that other wrongdoers exploit. Given the growing realisation among policymakers that crime hasn’t been falling over the past decade, merely moving online, we might reasonably hope for better funded and coordinated law-enforcement action.

Richard Clayton gave a presentation on this yesterday at WEIS. His final slide contained a summary.

  • Payment fraud is up, but credit card sales are up even more—so we’re winning.
  • Cryptocurrencies are enabling new scams, but the big money is still being lost in more traditional investment fraud.
  • Telecom fraud is down, basically because Skype is free.
  • Anti-virus fraud has almost disappeared, but tech support scams are growing very rapidly.
  • The big money is still in tax fraud, welfare fraud, VAT fraud, and so on.
  • We spend more money on cyber defense than we do on the actual losses.
  • Criminals largely act with impunity. They don’t believe they will get caught, and mostly that’s correct.

Bottom line: the technology has changed a lot since 2012, but the economic considerations remain unchanged.

Posted on June 4, 2019 at 6:06 AM

Comments

Kyle June 4, 2019 8:26 AM

What exactly qualifies as an effective “response”? How is he proposing to invest in responding after the damage, so to speak, has been done? Doesn’t it still make more sense to invest in prevention?

wiredog June 4, 2019 9:03 AM

“We spend more money on cyber defense than we do on the actual losses.”
Well, yes, but how much would we lose if we had no defenses? The most expensive army is the one that’s second best.

Mike Acker June 4, 2019 9:20 AM

What is the human cost of insecure software? This should include anxiety, as well as the countless hours and dollars wasted on defensive efforts that turn out ineffective. And what of the lost opportunities and growth that result from customers avoiding the use of technology?

What is the real cost of insecure software?

But the larger question is: do we really want to live like this?

albert June 4, 2019 11:21 AM

And there’s a 30% chance of rain today.

“…calculating the worldwide cost of cybercrime…”

I stopped right there. No sense reading a 32-page nothing burger.

. .. . .. — ….

metaschima June 4, 2019 4:53 PM

Interesting paper. I think it’s difficult to apply this research to a particular security application due to its broad study population. I would agree that it’s easy to overspend on expensive security solutions that do exactly the same thing as cheaper options. I think the most important thing is proper design, implementation, and coordination of security efforts. It sure would be great to be able to catch more cybercriminals and punish them, but there’s a lot of bureaucracy in the way. You think you can extradite people from just anywhere?

Frank Wilhoit June 4, 2019 5:44 PM

I think it is becoming more difficult and less relevant to distinguish between security-related software defects and other software defects.

From the standpoint of the end-user, all software defects pose unpredictable and unquantifiable risks. The responsibility to manage those risks is not matched by any corresponding capability.

I have worked with numerous small-to-medium-sized businesses whose competitive positions were crippled by the unforeseen costs (due to defects) of adopting one particular consumer-grade software platform. Or of being unable/unwilling to adopt that platform: at a paper-manufacturing plant, if a web of paper breaks, the line must be stopped within 100 ms or people will die. Toy platform couldn’t do that; Company X refused to adopt it; Company X lost a lot of customers, albeit none of them by being cut in half by a flying web of paper.

The analogy does not fail by intentionality: malware can be regarded as software that is adopted unawares, just as software that is adopted unawares can be regarded as malware. Example: many-to-most of the people who adopted (within a certain time frame) a certain top-shelf database package were not specifically aware that they were also adopting that package’s driver software for a certain language runtime environment, which was so defective as to be beyond use, and which remained in that state for years. At one very large company, that driver caused operational disruptions that resulted in million-dollar regulatory fines. Was it malware?

Neither is the specific risk of theft, as opposed to other kinds of operational failures, a differentiator. An accounts-receivable application, running on the consumer-grade platform mentioned above, had a defect that allowed customers of the end-user to steal money from the end-user by telephoning the end-user’s accounts-receivable department and posing crafted queries about their invoices. Viewing the invoices altered their data content by recursively applying discounts that should have been applied at most once. Was this a security defect? Its exploitation certainly involved human engineering.
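A minimal sketch of that defect class (hypothetical names and figures; the actual application is of course not identified): a read path that mutates state, so each viewing re-applies a discount that should have been applied at most once.

```python
# Hypothetical sketch of the defect class described above: a "view"
# operation that is not idempotent, so every read re-applies a
# discount that should have been applied at most once.

class Invoice:
    def __init__(self, amount: float, discount_rate: float):
        self.amount = amount
        self.discount_rate = discount_rate
        self.discount_applied = False

    def view_buggy(self) -> float:
        # Buggy read path: viewing the invoice silently mutates it.
        self.amount *= (1 - self.discount_rate)
        return self.amount

    def view_fixed(self) -> float:
        # Correct read path: the discount is applied at most once.
        if not self.discount_applied:
            self.amount *= (1 - self.discount_rate)
            self.discount_applied = True
        return self.amount

inv = Invoice(1000.00, 0.10)
for _ in range(3):
    print(f"{inv.view_buggy():.2f}")  # 900.00, 810.00, 729.00 -- shrinks on every call
```

Each query that triggered a "view" shrank the invoice a little more; enough calls, and the balance owed becomes whatever the caller likes.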

The parallel with the recent incidents in the airline industry is also obvious.

The point is that nearly all software — consumer-grade, “enterprise”, embedded, industrial automation — is defective to the point of being outright unfit for purpose, regardless of what that purpose is, and regardless of whether or not its end-users know exactly which software they are running or why.

Focussing on “security” risks in isolation from all of the other classifications of risk (none of which are “manageable” in any conventional understanding of the word) is simply a failure of situational awareness.

Clive Robinson June 4, 2019 9:43 PM

@ Mike Acker,

    But the larger question is: do we really want to live like this?

Which I would have thought was self-evident. Thus the actual question is,

    How the hell do we get off of this death spiral whilst we are still alive?

Clive Robinson June 4, 2019 11:33 PM

@ Bruce, All,

The point we really all should dwell on is,

    We spend more money on cyber defense than we do on the actual losses.

There are a couple of reasons for this.

The first is the old “Defence Appropriations dilemma” of,

    You only know you spent too little, or unwisely, because you got attacked.

That is, you never actually know when you are spending too much, or wisely. After a point you have spent enough to deter any attackers who might give you the once-over, but you don’t know that, so you carry on spending until you think you have spent enough.

Which brings us to the second reason. Because you have no way to measure the effectiveness of your spend other than by being attacked, you become prey to what is in effect another form of fraud, the more refined form of the

    Anti-Virus / Tech-Support scam.

That is, much “defense spending” is made unwisely on what is in effect “snake oil with a dashboard and a faux-metric report generator”.

The product vendors get away with it for a couple of reasons:

1. We have no useful measurands.
2. We are still a target-rich environment.

I’ve mentioned both of these in the past, and we still don’t do anything about the important one, which is “no measurands”… Without them we fall back on the notion of “Best Practice”, which is basically,

    Last year our industry survey showed the twenty least attacked organisations did the following ten things in common.

So those ten things are this year’s best practice recommendation…

Am I the only person who sees the problems with this Best Practice idea?

Well, there’s the obvious one: the survey is self-reported, so it has cognitive bias built in right from the get-go.

But also, you can be successfully attacked and not know it. That was the point about Advanced Persistent Threat (APT) attacks: they get in, but you see no sign that they are there. Or as a friend once cheerfully put it, in response to a conversation about “talking heads” lifted from the basket of the guillotine, “Even the fastest bullet is not instantly fatal; it always takes time to die, even if you don’t know it.”

But there is also “probability” to consider. ICT is a target-rich environment, so much so that comparatively few ICT systems are ever actually attacked. Which means even more or less completely useless security measures will “appear to stop” attacks, and those processing the survey can fall into “magic umbrella” thinking. It works like this: I live in a near-desert where it hardly ever rains, so I have an umbrella just in case it does. I frequently forget to take it with me, but I’ve never seen it rain when I’ve had it with me, so it must be magical… In other words, it’s “observed correlation”, not “proved causation”. The lower the probability of rain, and the lower the probability that you take the umbrella, the far lower still is the probability that it rains on the rare day you do remember it. So the fact that you’ve seen no sign of a successful attack in no way means the security measures you put in place actually stop attacks. “Taking two weeks’ vacation in the Bahamas” could be the number one “best practice” measure, if you happen to ask about holidays in the survey.
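That base-rate effect is easy to demonstrate. Below is a minimal simulation sketch (all rates invented for illustration, not taken from any survey): a defence with zero effect is handed out at random in a target-rich environment, yet the defended group still reports an almost entirely attack-free year, which a survey would happily credit to the defence.

```python
import random

random.seed(42)

N = 100_000          # organisations (made-up population size)
P_ATTACK = 0.01      # attacks are rare: a target-rich environment
P_DEFENCE = 0.30     # fraction deploying a defence that does nothing

attacked_with, with_defence = 0, 0
attacked_without, without_defence = 0, 0

for _ in range(N):
    defended = random.random() < P_DEFENCE
    attacked = random.random() < P_ATTACK  # defence has zero effect
    if defended:
        with_defence += 1
        attacked_with += attacked
    else:
        without_defence += 1
        attacked_without += attacked

# Both groups see ~99% "no attack", so the useless defence looks
# like a magic umbrella: correlation observed, causation absent.
print(f"defended:   {100 * (1 - attacked_with / with_defence):.1f}% saw no attack")
print(f"undefended: {100 * (1 - attacked_without / without_defence):.1f}% saw no attack")
```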

I could go on, but underneath all of this is the lack of measurands, without which you cannot apply the scientific method…

Petre Peter June 5, 2019 6:34 AM

I have the feeling that, in the future, the cost of cybercrime will be set by insurance companies.

Clive Robinson June 5, 2019 6:50 AM

@ Frank Wilhoit,

    Focussing on “security” risks in isolation from all of the other classifications of risk (none of which are “manageable” in any conventional understanding of the word) is simply a failure of situational awareness.

I’ve argued for a long time that security in ICT design is the same as, or part of, a “quality” process.

Also, arguably, in software the complexity is high and the development cycles are ridiculously short.

People talk in awe of the several million physical parts in a modern jet liner, but software with many times that number of instructions, used day after day, produces no such awe; more usually, disdain for its usability.

There is, I guess, a degree of “out of sight, out of mind”, not just in users but in developers and their management, and the “first to market” push of rapid “release then patch” cycles does not help. Each release must contain new “features”, thus new “errors”, with some later “fixes” introducing further errors, and so on.

When I was developing hardware and software for embedded systems, which I guess you could say were the forerunners of today’s IoT and mobile technology, there was no “Patch Tuesday”, and you had to be right in both hardware and software before it went out the door. The result was that hardware and software had similar development and test times.

In general I would have a prototype design for both hardware and software up within a couple of weeks, and “breadboarding” would usually have started within that time. That is, a chip development system would be on the bench and a skeleton circuit on “perfboard” or “stripboard” up and running, with the “user interface” on the left and the “hardware interface” on the right of the microcontroller perfboard.

A second chip development system would be up and running within a week of that, for “hardware development” use. This would have a microcontroller development PCB with the user-interface hardware built on and “fitted” for the case design, and it would usually be “finalised” in terms of the hardware interface. Thus as other custom parts arrived they would be fitted and tested electrically, the test software loaded and their software interface checked, and then it would all get turned over to the hardware people for electronic and functional testing.

At this stage there was usually an up-and-running BIOS in software, with the bottom-side handlers and fast interrupt handlers, and a rapidly developing OS on top providing the top-side and slow handlers for buffering etc., giving the application software development a standardised abstract interface.

Then “software development proper” could get going whilst the hardware people went through various stages. A provisional software release was produced for regulatory testing and compliance, and for the customer’s engineers and marketing people to play with. This would be “fleshed out” and “intensively tested” whilst compliance testing etc. was going on. By which time things like custom parts (LCD displays, keyboard pads, cases, etc.) would have arrived, enabling a half dozen or so units to be constructed and sent out for “user evaluation” testing and “feature tuning”, and the scheduling for “line production” started, such that on the production sign-off signature from the customer, custom-part manufacturing would start and the finalised software would be sent off to the microcontroller manufacturer for “mask production” and chip production. At which point the person who had drawn the short straw would end up on a long haul to one of the factories in various F.E. countries to “get the line up and unsnagged”. On some occasions their return “hand luggage” would fill one of those aircraft hold containers as they flew back with the first units off the line, so the customer could do a “product launch”.

The point being that the software development time was actually very short: you could type up the various “stub prototypes” in a day or two, then they would undergo a week or two of mangling whilst the software spec developed from the customer “wish list” and “this might be nice” upgrades. The flesh would be hung on those bones fairly quickly as almost independent modules; then came a comparatively very long test-and-fix cycle to grind every bug that could be found into the dirt. Typically test/fix was 6-15 times the development time of the first candidate software. That might sound strange to some, but unlike PC apps that have maybe five to ten hours of uptime with a user, or server apps with 4,000-hour uptimes, we knew some of our stuff would routinely have uptimes exceeding the overall hardware MTTF of 150,000 hours. As I’ve mentioned before, I’m still supporting some ICS stuff from the late 80s / early 90s that’s got at least 220,000 hours of uptime on the clock.

That said, there were some funny and fun times. One was when an airline, for promotional purposes, did a special on First Class flights with an unlimited baggage allowance. Thus some engineers flew First Class for a while because it actually cost less… Usually you got economy and a big baggage bill. Economy on some Asian airlines of the time was the modern equivalent of the “little ease” torture to somebody my size.

But even that could have its funny side. Back then I used to have “travel battle dress” that I’d developed whilst working in the offshore oil industry: heavy steel-toecap dock/rigger boots or equivalent motorcycle boots, dark brown denim jeans, and a long, warm, padded black leather coat which you could sleep in and almost use as a tent, which did not crease and stayed dry even in a monsoon torrent. Whilst not quite looking like a Hell’s Angel in a giant’s-revenge movie, I could have passed size-wise as an extra for a Star Trek Klingon.

On one occasion, due to a cods-up, I got pulled while boarding to sort out an issue, because of a message the cargo handlers passed up about insufficient hold space… Which meant I ended up having to run full tilt down the embarkation ramp. Thus there was this massive boom-boom as of doom as my motorcycle boots crashed into the floor, and my 6ft 6, 250-pound, broad-boned and muscled rugby-playing frame, long leather coat tails streaming behind me, sprinted into view at close to twelve miles an hour, arms and legs pumping like some infernal machine of gothic nightmares. I must have scared the living soul out of the petite Japanese boarding hostess, because all she could do, with a fixed rictus of frozen greeting and wide-open, staring, unblinking eyes, was point me into First Class rather than economy. So, best free upgrade I ever got on an airplane 😉 Oh, and she did warm to me after I apologised for having been late and inadvertently scaring her half to death. I think she quickly realised I was more of a teddy bear than a grizzly.

Wilhelm Tell June 5, 2019 11:57 AM

    We spend more money on cyber defense than we do on the actual losses.

Another question is what counts as a loss. When you consider the whole economic system, which includes both the vicious and the virtuous parties (they are all humans alike), most of these crimes only move money or wealth from one pocket to another, and no actual loss occurs.

As an example of a huge criminal fraud, take the production method of silk. The Chinese kept it a strict secret for two thousand years, until a fraudster stole the “patent”.

For China the loss was enormous, but in the global context the world was the winner: within a couple of hundred years silk production was up and the price down, so that even the poorest citizens of the world could afford some piece of silk for their pleasure.

Nick June 5, 2019 2:00 PM

“… denial-of-service worms released by state actors, such as NotPetya; we have to take a view on whether they count as cybercrime.”

Why wouldn’t they?

The most serious crimes are ones committed by governments. And by the way, don’t confuse “crime” with “what is prohibited by law”. They are correlated, but the correlation coefficient has been falling of late.

Jeremy June 6, 2019 6:09 AM

That rustling sound is a few dozen pencil sketches being thumbtacked to a few dozen Clive walls in various corners of the globe.

A Nonny Bunny August 9, 2019 2:53 PM

@Clive Robinson

    That is, you never actually know when you are spending too much, or wisely. After a point you have spent enough to deter any attackers who might give you the once-over, but you don’t know that, so you carry on spending until you think you have spent enough.

Or…
You could collect data on all your compatriots, some of whom got attacked and some of whom didn’t, and figure out how their level of protection factors in.

And if the cost of defense is greater than the losses of an attack, get insurance instead.
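As a toy illustration of that trade-off (every figure here is invented), compare the expected annual cost of accepting the risk, defending, and insuring:

```python
# Toy expected-cost comparison for the decision sketched above.
# All numbers are invented for illustration.

p_attack = 0.05             # annual probability of a damaging attack
loss_if_attacked = 300_000  # dollars lost if one occurs
defence_cost = 15_000       # annual defence spend
defence_effect = 0.60       # fraction of attacks the defence stops
premium = 12_000            # annual premium for full coverage

accept = p_attack * loss_if_attacked
defend = defence_cost + p_attack * (1 - defence_effect) * loss_if_attacked
insure = premium

for name, cost in (("accept", accept), ("defend", defend), ("insure", insure)):
    print(f"{name}: expected annual cost ${cost:,.0f}")
# accept: $15,000 / defend: $21,000 / insure: $12,000 -- with these
# numbers the defence costs more than it saves, and insuring wins.
```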

Clive Robinson August 9, 2019 4:03 PM

@ A Nonny Bunny,

    You could collect data on all your compatriots, some of whom got attacked and some of whom didn’t, and figure out how their level of protection factors in.

That would be nice, but it does not work…

Firstly, as I noted with,

    Because you have no way to measure the effectiveness of your spend other than by being attacked, you become prey to what is in effect another form of fraud,

The reason you cannot “measure the effectiveness” is a sad and sorry tale of the primary failing of the ICTsec industry, but as I said,

    I’ve mentioned both of these in the past, and we still don’t do anything about the important one, which is “no measurands”… Without them we fall back on the notion of “Best Practice”, which is basically,

What you have described…

The ICTsec industry needs proper measurands by which scientific methods can be brought to bear, and no, I don’t mean “AV software dashboard readouts”; they are at best a joke.

The design of useful measurands is a thorny subject for various reasons. The biggest, however, is that neither the vendors of ICTsec systems nor those who use those systems in their workplace to generate management reports actually want scientific enquiry into their activities (for the good old reason that it’s in neither’s financial or career-progression interests).

As for insurance companies, they are the ones most likely to force proper measurands, so that their actuarial bods can make realistic assessments.
