Ethics in AI: The Missing Code

As part of its push toward artificial intelligence, Microsoft laid off more than 10,000 employees and spent billions on acquiring AI tech. Among those laid off was the seven-member team in its Office of Responsible AI.

While the software company indicated that it remains “committed to developing AI products and experiences safely and responsibly,” that commitment may be broad but not deep. One problem with ethics and AI is that it is difficult to balance the various ethical values we hold and to hard-code them into a machine learning tool that both mimics and shapes how society operates. As humans, we have competing values: we value liberty and freedom as well as order and peace. We value independence and cooperation. We value accountability and privacy. Embedding these values into AI programs requires us to understand how those programs operate, what makes them different from other computer algorithms, and which rules should be absolute and which relative. “Thou shalt not kill” becomes “Thou shalt not murder,” which introduces the conundrums of self-defense, defense of property, justification and so on. Is it ethical to kill Ellie to save the human race from Cordyceps?

The Washington Post reported that Snapchat’s AI program, interacting with what it believed to be a 15-year-old boy, offered advice about how to mask the odor of alcohol from the boy’s parents or, when asked for help with a school essay, simply wrote the essay. The same program, interacting with a person posing as a 13-year-old girl, explained how she could lose her virginity to a 31-year-old man, including how to “set the mood” with candles or music, and gave practical advice on how to lie to her parents about going out of state to meet him.

Artificial intelligence (AI) is rapidly transforming how we live and work. From virtual assistants and chatbots to autonomous vehicles and medical diagnosis, AI is revolutionizing every aspect of our lives. However, with the growth of AI, concerns have also emerged regarding the ethical implications of this technology.

Ethics in AI refers to the moral principles and values that govern the development and use of AI. The need for ethics in AI arises from the technology’s potential to cause harm or perpetuate bias. AI is only as ethical as the humans who design and deploy it, so ethical considerations must be embedded in the design and deployment of AI systems. The problem is that there is no real consensus on what is “ethical” in AI, or on how to implement a system of ethics. As a general rule, ethicists agree on five basic principles for the ethical use of AI: (1) transparency; (2) justice and fairness; (3) non-maleficence; (4) responsibility; and (5) privacy. Beyond these, we must confront the structural limitations of AI programs and of the data on which they are trained: inherent and unknown bias; cultural, religious and historical perspective; lack of transparency in gathering or publishing the underlying data from which the AI program “learns”; the impact of AI on institutions, including institutions of power; issues of safety and control; and even how to “value” (that is, to score) ethical principles.

Take a simple and well-known ethical dilemma: the trolley problem, presented at its simplest as follows. A trolley is headed down a track in a manner that is certain to kill a handful of people. By pulling a lever, you can divert the trolley to another track where only one person would be killed. Applying “Spock” logic, the needs of the many outweigh the needs of the few, right? Now apply this to self-driving cars. The car’s embedded logic is designed to protect other drivers and pedestrians. Should it be programmed to sacrifice the driver and passenger (the car’s customers) in favor of protecting “innocent” third parties? Should it weigh the number of people in the car it is seeking to avoid against the number of people in the car it is operating? Should it favor humans over deer? Adults over children? And all of that assumes the logic programmed into the car is based on good data.
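To see why “scoring” ethical principles is such an uncomfortable exercise, consider what any such embedded logic ultimately reduces to. The sketch below is purely illustrative; the weights, the Outcome fields and the choose_action helper are invented for this article, not drawn from any vehicle maker’s actual code. Its only point is that once the decision is written as a function, someone has committed to a number for every life.

```python
from dataclasses import dataclass

# Purely illustrative: these weights are invented assumptions, not any
# manufacturer's actual policy. The point is that some set of numbers
# must exist for the logic to run at all.
WEIGHTS = {
    "occupant": 1.0,     # people inside the vehicle (its customers)
    "pedestrian": 1.0,   # "innocent" third parties
    "child_bonus": 0.5,  # extra weight for a child? decided by whom?
    "animal": 0.05,      # deer versus humans
}

@dataclass
class Outcome:
    occupants_harmed: int = 0
    pedestrians_harmed: int = 0
    children_harmed: int = 0
    animals_harmed: int = 0

def harm_score(o: Outcome) -> float:
    """Lower is 'better' -- but every coefficient is an ethical claim."""
    return (
        WEIGHTS["occupant"] * o.occupants_harmed
        + WEIGHTS["pedestrian"] * o.pedestrians_harmed
        + WEIGHTS["child_bonus"] * o.children_harmed
        + WEIGHTS["animal"] * o.animals_harmed
    )

def choose_action(options: dict) -> str:
    """Pick the maneuver with the lowest harm score."""
    return min(options, key=lambda name: harm_score(options[name]))

# The trolley problem, restated as data:
print(choose_action({
    "stay_on_course": Outcome(pedestrians_harmed=5),
    "swerve": Outcome(occupants_harmed=1),
}))  # -> "swerve": the needs of the many, encoded as arithmetic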

One of the most significant ethical concerns regarding AI is the potential for bias. AI algorithms are only as unbiased as the data they are trained on; if the data is biased, the AI system will be biased too. Facial recognition technology has been criticized for its bias against people of color and women. In 2018, Amazon’s AI recruitment tool was found to be biased against women, reflecting the biases in the data it was trained on. Software designed to predict future recidivism of parolees reflects the bias in the data about which people were granted parole in the past. Relying on arrest histories to determine where “high crime” areas might be (predictive policing) reflects where we have traditionally enforced laws (leading to more arrests) and which kinds of crime (drug crimes over financial crimes) we have chosen to pursue. Rather than being “predictive,” these models simply replay past biased decisions.
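A minimal, self-contained simulation makes that feedback loop concrete. Everything here is invented for illustration: two neighborhoods with identical underlying offense rates, where one simply starts with more recorded arrests because it was patrolled more heavily. An allocation rule that follows past arrest counts keeps sending patrols back to the same place, and the new arrests keep “confirming” the original decision.

```python
import random

random.seed(42)

# Invented numbers: both neighborhoods have the SAME true offense rate;
# only the historical arrest counts differ (A was patrolled more heavily).
TRUE_OFFENSE_RATE = {"A": 0.10, "B": 0.10}
arrests = {"A": 50, "B": 5}  # the biased historical record

def allocate_patrols(arrests, total=10):
    """'Predictive' policing: send patrols where past arrests were made."""
    total_arrests = sum(arrests.values())
    return {hood: round(total * n / total_arrests) for hood, n in arrests.items()}

for year in range(5):
    patrols = allocate_patrols(arrests)
    for hood, n_patrols in patrols.items():
        # Arrests can only happen where patrols are sent, so the arrest
        # data tracks patrol allocation, not underlying crime.
        for _ in range(n_patrols * 100):
            if random.random() < TRUE_OFFENSE_RATE[hood]:
                arrests[hood] += 1
    print(f"year {year}: patrols={patrols}, cumulative arrests={arrests}")

# The gap between A and B keeps widening even though the true offense
# rates are identical: the model "predicts" its own past enforcement.
```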

Another ethical concern is the lack of transparency in AI systems. Many AI systems are “black boxes,” meaning it is difficult to understand how they arrive at their decisions. This opacity makes it hard to identify and address biases or errors in the system. In 2020, for example, the UK government faced criticism for using a “secretive” algorithm to award school grades during the COVID-19 pandemic; the algorithm was found to be biased against students from disadvantaged backgrounds, and its workings were not disclosed. The essential difference between machine learning algorithms and preprogrammed algorithms is that nobody, including the programmer, can fully know how the AI program is working today or how it will work tomorrow. We can decide what data to use to train the AI and what outputs are prohibited (e.g., don’t tell 13-year-old girls how to lie to their parents about sex), but the programs remain opaque about how they reach their results.
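A side-by-side toy example shows the difference. It uses scikit-learn purely for illustration, and the messages and labels are invented; the rule-based filter’s entire decision logic is one readable line, while the learned model’s “logic” is a set of coefficients that nobody wrote and that will shift with every retraining.

```python
# Contrast between a preprogrammed rule and a learned model. scikit-learn
# is used only for illustration; the messages and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Preprogrammed rule: every line of the decision logic is inspectable.
def rule_based_filter(message: str) -> bool:
    return "lie to your parents" in message.lower()

# Learned model: the decision logic is whatever weights fall out of training.
messages = [
    "how do I set the mood",
    "help me write my school essay",
    "how do I hide this from my parents",
    "what is photosynthesis",
]
labels = [1, 0, 1, 0]  # toy labels: 1 = should be blocked

vec = CountVectorizer()
model = LogisticRegression().fit(vec.fit_transform(messages), labels)

# The closest thing to an "explanation" is a coefficient per token --
# nothing like the single readable rule above.
print(dict(zip(vec.get_feature_names_out(), model.coef_[0].round(2))))
```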

AI’s ability to personalize content also raises ethical concerns. While personalization can enhance the user experience, it can also create filter bubbles, in which individuals are exposed only to information that confirms their existing beliefs. This can contribute to the polarization of society and limit individuals’ exposure to new ideas and perspectives. Recommendation engines use AI and machine learning to suggest new content based on what the user “wants” or has already seen, producing a “rabbit hole” effect. This is the opposite of diversity and community (assuming those are ethical values one wants) and leads to fractionalization and radicalization. These are ethical concerns not just for individuals but for society as a whole.
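A toy recommender (items, topics and scores all invented for illustration) shows how “more of what you watched” narrows rather than broadens: one curious click on outrage content, and every subsequent recommendation reinforces it.

```python
# A toy "more of the same" recommender. Items, topics and scores are all
# invented for illustration.
ITEMS = {
    "cooking_basics":   {"cooking": 1.0},
    "knife_skills":     {"cooking": 0.9, "gear": 0.1},
    "local_news":       {"news": 1.0},
    "conspiracy_video": {"news": 0.3, "outrage": 0.7},
    "outrage_channel":  {"news": 0.1, "outrage": 0.9},
    "more_outrage":     {"outrage": 1.0},
}

def similarity(a, b):
    """Dot product of two topic-weight dictionaries."""
    return sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))

def profile(history):
    """Sum the topic weights of everything the user has already seen."""
    prof = {}
    for item in history:
        for topic, w in ITEMS[item].items():
            prof[topic] = prof.get(topic, 0.0) + w
    return prof

def recommend(history):
    """Suggest the unseen item most similar to the user's history."""
    prof = profile(history)
    candidates = [i for i in ITEMS if i not in history]
    return max(candidates, key=lambda i: similarity(prof, ITEMS[i]))

history = ["conspiracy_video"]           # one curious click...
for _ in range(2):
    history.append(recommend(history))   # ...and the engine doubles down
print(history)  # ['conspiracy_video', 'more_outrage', 'outrage_channel']
```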

Furthermore, AI’s ability to predict human behavior can be both a blessing and a curse. On the one hand, it can be used to prevent crime or improve health care outcomes. On the other hand, it can be used for surveillance, and many fear that it could erode individual privacy. For example, in China, the government has implemented a social credit system that uses AI to monitor and rate citizens’ behavior, potentially limiting their access to social services.

There is also the issue of AI’s inability to distinguish between what people can do and what they should do. AI systems are designed to optimize for a specific objective, and they do not consider the broader ethical implications of their actions. For example, an AI system designed to optimize delivery routes may suggest routes that are more efficient but ignore the impact on the environment or on communities. Which secondary effects and values should an AI algorithm consider, and how should it weigh them?
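A minimal sketch of that delivery-route example (route names, travel times and “externality” scores all invented) makes the point: the optimizer cannot care about anything left out of its objective function, and what gets left out is an ethical choice someone made, deliberately or not.

```python
# Invented routes, times and externality scores; the point is only that an
# optimizer is blind to whatever its objective function omits.
ROUTES = {
    "through_school_zone": {"minutes": 18, "externality": 9.0},
    "residential_cut":     {"minutes": 20, "externality": 6.0},
    "highway_loop":        {"minutes": 22, "externality": 2.0},
}

def best_route(externality_weight: float = 0.0) -> str:
    """Minimize travel time plus a (possibly zero) weight on secondary harms."""
    return min(
        ROUTES,
        key=lambda r: ROUTES[r]["minutes"]
        + externality_weight * ROUTES[r]["externality"],
    )

print(best_route())                        # 'through_school_zone': fastest wins
print(best_route(externality_weight=1.0))  # 'highway_loop': once harms count
```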

To address these ethical concerns, several legal and ethical frameworks have been proposed. In the European Union, the General Data Protection Regulation (GDPR) includes provisions for “explainability” and the right to a human review of decisions made by AI systems. The IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems has also developed a framework for ethical AI design.

But none of these frameworks works without a financial, moral and business commitment to implement it. In the race to make AI “cool,” “functional” and, ultimately, “profitable,” we cut corners on ethics, morals and privacy. If moral principles, fuzzy as they are, are not embedded into AI models from the start, bolting them on afterward may prove both impossible and too late. This is not easy, but it is necessary.


Mark Rasch

Mark Rasch is a lawyer and computer security and privacy expert in Bethesda, Maryland, where he helps develop strategy and messaging for the Information Security team. Rasch’s career spans more than 35 years of corporate and government cybersecurity, computer privacy, regulatory compliance, computer forensics and incident response. He is trained as a lawyer and was the Chief Security Evangelist for Verizon Enterprise Solutions (VES). He is a recognized author of numerous security- and privacy-related articles. Prior to joining Verizon, he taught courses in cybersecurity, law, policy and technology at various colleges and universities, including the University of Maryland, George Mason University, Georgetown University and the American University School of Law, and was active with the American Bar Association’s Privacy and Cybersecurity Committees and the Computers, Freedom and Privacy Conference. Rasch has worked as cyberlaw editor for SecurityCurrent.com, as Chief Privacy Officer for SAIC, and as Director or Managing Director at various information security consulting companies, including CSC, FTI Consulting, Solutionary, Predictive Systems, and Global Integrity Corp. Earlier in his career, Rasch was with the U.S. Department of Justice, where he led the department’s efforts to investigate and prosecute cyber and high-technology crime, starting the computer crime unit within the Criminal Division’s Fraud Section, efforts which eventually led to the creation of the Computer Crime and Intellectual Property Section of the Criminal Division. He was responsible for various high-profile computer crime prosecutions, including Kevin Mitnick, Kevin Poulsen and Robert Tappan Morris. Prior to joining Verizon, Mark was a frequent commentator in the media on issues related to information security, appearing on BBC, CBC, Fox News, CNN, NBC News, ABC News, the New York Times, the Wall Street Journal and many other outlets.
