
Human vs. Artificial Intelligence in Autonomous Systems

A common goal, as we see in many articles on AI (artificial intelligence) and ML (machine learning), is to make AI/ML systems more like humans. Some claim that humans are much better at driving road vehicles than self-driving software, even though the accident statistics appear to contradict this view. Perhaps we have it backwards! Maybe the goal should be to make humans as smart as machines. Two recent articles in The New York Times suggest that the latter approach might well be better.

On two successive days, September 17 and 18, 2021, The New York Times published articles that call into question the presumed superiority of human over artificial intelligence with respect to weapons systems. The first article, “Pentagon acknowledges Aug. 29 drone strike in Afghanistan was a tragic mistake that killed 10 civilians,” by Eric Schmitt and Helene Cooper, describes how a series of human errors led to the mistaken firing of a missile from a drone at a vehicle thought to be driven by ISIS-K terrorists and carrying explosives intended for Hamid Karzai International Airport in Kabul. In fact, the vehicle was driven by someone supportive of the U.S., and seven children were among the ten casualties.

The second article, “The scientist and the A.I.-assisted, remote-control killing machine,” by Ronen Bergman and Farnaz Fassihi, describes how Israeli agents used a remote-controlled machine gun, likely with facial-recognition software, to kill a senior Iranian nuclear scientist while, remarkably, sparing his wife, who was sitting right next to him in the car as it was strafed with bullets.

The articles are available at Pentagon Acknowledges Aug. 29 Drone Strike in Afghanistan Was Tragic Mistake – The New York Times (nytimes.com) and The Scientist and the A.I.-Assisted, Remote-Control Killing Machine – The New York Times (nytimes.com), respectively.

Machines lack human understanding, perception, empathy, and motivation, even when they appear to possess them. Such sensibilities belong to the creators of the machines; they are not inherent in the machines themselves. Machines’ actions do, however, reflect specific motives or intent to the extent that the machines do what is expected of them, unless they exhibit anomalous or unpredicted behavior, which is more likely in the case of autonomous machines.

On the other hand, autonomous machines do not exhibit many performance-impairing human characteristics, such as getting bored, becoming distracted, requiring breaks, and dozing off.

In order to pass the so-called Turing Test, or “imitation game,” proposed by Alan Turing, a machine’s responsive behavior must be indistinguishable from that of a human. But, under the covers, a machine’s mechanisms merely emulate human emotions and motivations; the machine does not actually feel them. That distinctly human capacity may indeed be unattainable. The question is whether we should try to achieve such a capability or simply acknowledge that there is a difference and that such a difference is acceptable, or even welcome. Why should we “bark up the wrong tree” with respect to intelligent and autonomous systems?
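To make “indistinguishable” concrete, here is a minimal, purely illustrative sketch of an imitation-game-style trial in Python. The `human_respond`, `machine_respond`, and `judge` functions are hypothetical stand-ins, not part of any real system; the point is only that the judge sees transcripts rather than respondents and, against a truly indistinguishable machine, can do no better than chance.

```python
import random

# Hypothetical stand-in respondents; in a real trial these would be a
# person at a keyboard and the AI system under evaluation.
def human_respond(question: str) -> str:
    return "I'd have to think about that for a moment."

def machine_respond(question: str) -> str:
    return "I'd have to think about that for a moment."

def judge(transcript_a, transcript_b) -> str:
    # A real judge applies human judgment to the transcripts; guessing at
    # random models the chance-level outcome a "passing" machine produces.
    return random.choice(["A", "B"])

def imitation_game_trial(questions) -> bool:
    """Run one trial; return True if the judge correctly identifies the machine."""
    # Hide the respondents behind the labels A and B, assigned at random.
    respondents = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        respondents = {"A": machine_respond, "B": human_respond}

    transcript_a = [(q, respondents["A"](q)) for q in questions]
    transcript_b = [(q, respondents["B"](q)) for q in questions]

    guess = judge(transcript_a, transcript_b)
    machine_label = "A" if respondents["A"] is machine_respond else "B"
    return guess == machine_label

if __name__ == "__main__":
    questions = ["What did you have for breakfast?", "Tell me a joke."]
    results = [imitation_game_trial(questions) for _ in range(1000)]
    # For a truly indistinguishable machine, accuracy hovers around 50%.
    print(f"Judge identified the machine in {sum(results) / len(results):.0%} of trials")
```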

Some believe that autonomy for computer systems is not achievable and that we shouldn’t waste our time trying to attain it—see Mark Stone, “AI security: How human bias limits artificial intelligence,” SecurityIntelligence, April 15, 2021, which is available at AI Security: How Human Bias Limits Artificial Intelligence (securityintelligence.com). Others relentlessly pursue the quest for “singularity”—when an AI system becomes smarter than humans—regardless of its feasibility or danger.

Perhaps the true test is to determine whether human responsive behavior is indistinguishable from that of AI machines. Of course, the problem, as stated above, is that AI systems are programmed by humans, who purposely or inadvertently insert their own biases into the code and train ML on data that are also biased. So, what is the answer? To have machines program humans, of course! But, talk about dystopia …
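As a purely hypothetical illustration of that bias point (not drawn from either article), consider a toy “model” that learns approval decisions from historical records in which one group was favored. The data and the naive learner below are assumptions chosen only to show that a system trained on skewed decisions reproduces the skew, regardless of applicants’ qualifications.

```python
from collections import defaultdict

# Hypothetical historical decisions: approvals reflect past human bias
# toward group "A" rather than actual qualifications.
training_data = [
    {"group": "A", "qualified": True,  "approved": True},
    {"group": "A", "qualified": False, "approved": True},
    {"group": "A", "qualified": True,  "approved": True},
    {"group": "B", "qualified": True,  "approved": False},
    {"group": "B", "qualified": True,  "approved": False},
    {"group": "B", "qualified": False, "approved": False},
]

# "Train" a naive model: record how often each group was approved in the past.
approval_history = defaultdict(list)
for record in training_data:
    approval_history[record["group"]].append(record["approved"])

def predict_approval(group: str) -> bool:
    history = approval_history[group]
    return sum(history) / len(history) > 0.5

# The model never looks at qualifications; it simply replays the historical bias.
for group in ("A", "B"):
    print(f"Qualified applicant from group {group}: approved = {predict_approval(group)}")
```

Real ML pipelines are far more elaborate, but the failure mode is the same: the model optimizes agreement with past decisions, biases included.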

*** This is a Security Bloggers Network syndicated blog from BlogInfoSec.com authored by C. Warren Axelrod. Read the original post at: https://www.bloginfosec.com/2021/10/04/human-vs-artificial-intelligence-in-autonomous-systems/