The Future of Machine Learning and Cybersecurity
The Center for Security and Emerging Technology has a new report: “Machine Learning and Cybersecurity: Hype and Reality.” Here’s the bottom line:
The report offers four conclusions:
- Machine learning can help defenders more accurately detect and triage potential attacks. However, in many cases these technologies are elaborations on long-standing methods—not fundamentally new approaches—that bring new attack surfaces of their own.
- A wide range of specific tasks could be fully or partially automated with the use of machine learning, including some forms of vulnerability discovery, deception, and attack disruption. But many of the most transformative of these possibilities still require significant machine learning breakthroughs.
- Overall, we anticipate that machine learning will provide incremental advances to cyber defenders, but it is unlikely to fundamentally transform the industry barring additional breakthroughs. Some of the most transformative impacts may come from making previously un- or under-utilized defensive strategies available to more organizations.
- Although machine learning will be neither predominantly offense-biased nor defense-biased, it may subtly alter the threat landscape by making certain types of strategies more appealing to attackers or defenders.
Clive Robinson • June 21, 2021 7:39 AM
@ Bruce, ALL,
The four conclusions are almost the same as those that have been given for computers since the 1960s,
Indicating that perhaps, on “Machine Learning”, there is nothing new beyond ordinary software.
Or to put it another way,
“Man thinks, man codes, code runs faster, code runs finer, but code does not do anything not thought up by man.”
However, there should be another conclusion,
“Speed kills”
If man offloads skills to computers, then yes, those deterministic skills can be done faster, more effectively, and more efficiently by a computer.
But what of non-deterministic skills?
It’s easy to see how learning is killed, innovation is killed, and thus progress is starved and killed.
We half-heartedly joke about “evolutionary cul-de-sacs” and saber-toothed tigers. But in all such things there is a germ of truth.
Is over-reliance on machine learning going to be mankind’s evolutionary cul-de-sac?
We are already seeing issues with machine learning and the justice system, where the non-transparency of so-called neural networks and similar systems (little more than glorified statistics packages) is being used by authoritarian guard labour to evade responsibility for their desired actions by hiding behind the “The Computer Says” excuse.
We know the GIGO principle applies by the trash-trailer-full to machine learning. By running “hidden tests”, criteria can be selected to give desired outcomes. That is, criteria that seem random will give rise to a “training data set” that poisons or predisposes the ML system, causing it to adopt certain desired characteristics that are effectively automated “isms”.
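As a toy sketch of the point above (all data, feature names, and labels here are hypothetical, invented purely for illustration): even the simplest frequency-based classifier will faithfully reproduce whatever bias its curated training set encodes, and the skew then looks like an objective “output of the computer”.

```python
# Minimal sketch: a counting classifier trained on a skewed data set
# absorbs the skew and emits it as if it were a learned fact.
from collections import Counter

def train(examples):
    """Learn label counts per feature value, i.e. an empirical P(label | feature)."""
    counts = {}
    for feature, label in examples:
        counts.setdefault(feature, Counter())[label] += 1
    return counts

def predict(model, feature):
    """Return the majority label seen in training for this feature value."""
    return model[feature].most_common(1)[0][0]

# Hypothetical "training data set" whose selection criteria were skewed:
# records from group A were over-sampled as "flag", even though the
# underlying behaviour of A and B is, by construction, identical.
biased_data = (
    [("group_A", "flag")] * 90 + [("group_A", "clear")] * 10 +
    [("group_B", "flag")] * 10 + [("group_B", "clear")] * 90
)

model = train(biased_data)
print(predict(model, "group_A"))  # prints "flag"  -- the automated "ism"
print(predict(model, "group_B"))  # prints "clear"
```

Nothing in the algorithm is malicious; the bias lives entirely in how the training examples were selected, which is exactly why “hidden tests” on the curation criteria are enough to steer the outcome.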
We further know that the likes of Peter Thiel of Palantir are pushing machine learning systems into law enforcement. The hidden aim is “dependency, thus profit”, the same as with drug dealers: they sell you junk cheap, you become dependent, then they jack the price.
In the Palantir model the aim is to get the systems in and the detectives phased out, so that the money that was spent on detectives goes to Thiel and Co, not on bringing detectives forward and training new ones to follow them.
So the machine learning, which is incapable of learning and thinking, and thus of responding in an intelligent way to changes in criminal activities, ceases to move with the criminal threat. But by the time that is realised, the continuity of human detectives is broken, and many of the most important skills are lost for quite some time, if not for good. Thus irreparable harm is done for short-term gain. It might be a politician’s dream, but it will be society’s near-endless nightmare.