On Opioids, Consultants and Information Security

On Feb. 4, consulting firm McKinsey agreed to pay more than half a billion dollars to settle claims by various state attorneys general in connection with its role in promoting the sale of addictive opioids to individuals across the country. The settlement, which represents more money than the consultant was paid by its pharmaceutical client, Purdue Pharma, raises the question of consultants’ liability for the acts of their customers when the customer actually takes their advice.

Remember, it’s not that Purdue Pharma was dissatisfied with McKinsey, or that the consultant breached its contract with the pharmaceutical company. If anything, McKinsey was too successful in helping its client to reach its goals – to promote, market and sell its product.

All this raises an interesting question for the information security industry, which relies heavily on vendors, suppliers and consultants to accomplish its objectives. Are information security consultants – including IT consultants, computer forensics and incident response companies, managed service and managed security providers, software-as-a-service (SaaS) providers and others – civilly responsible to third parties when they do what the customer asks them to do?

Magic 8-Ball says … situation murky. Ask again later.

Means vs. Ends

One of the primary goals of information security – and information security consulting – is to protect data. That is, to ensure that information reaches the people the client wants it to reach, and not those it doesn’t. Concepts and technologies like access control, authentication, encryption at rest, encryption in transit, etc., are all designed to give the client control over who sees, and who does not see, the data it is generating or using. These are all good tools and methods for data security.
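
To make the point concrete, here is a minimal sketch (in Python, using the widely available cryptography package’s Fernet recipe) of what “encryption at rest” looks like in practice. The file name and payload are hypothetical, chosen only for illustration – the point is that the tooling protects whatever bytes it is handed, with no notion of whether those bytes ought to be protected at all.

```python
# Minimal sketch of encryption at rest using the "cryptography" package's
# Fernet recipe. File name and payload are hypothetical.
from cryptography.fernet import Fernet

# Generate a symmetric key; in practice this would live in a key-management
# system, not alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

# The library encrypts whatever bytes it is given -- it has no idea whether
# the client *should* be hiding this data, only that it asked to.
plaintext = b"client records the customer wants kept confidential"
ciphertext = cipher.encrypt(plaintext)

with open("records.enc", "wb") as fh:
    fh.write(ciphertext)

# Only a holder of the key can recover the original data.
with open("records.enc", "rb") as fh:
    recovered = cipher.decrypt(fh.read())
assert recovered == plaintext
```

The code is morally neutral by design; any judgment about what deserves to be encrypted has to come from the people deploying it.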

But does an information security professional have a duty to inquire about whether the information should be kept secret at all? Does an IT consultant have an obligation to inquire as to why the client wants the data secure, and why the client wants it encrypted?

Legality and Morality

IT security professionals, like other IT professionals, rarely consider the morality and consequences of their actions. Take, for example, Clearview AI, a company that collects (scrapes) “public” information from social media, websites and public databases, and uses AI to create a database of images against which other, suspect images can be matched.

Got video from the “storm the Capitol” protest? Run it through Clearview, and voila! Names, addresses, social media posts, friends, relatives and recent activities of the guy on the floor of the U.S. Senate. Easy, peasy, lemon squeezy. Now, apply the same tech to those protesting the arrest of Alexei Navalny, so the protesters can be arrested, fired and prosecuted. Same tech. Different moral consideration.

What if a purveyor of child pornography wanted advice from a security professional about how best to encrypt and “protect” his stash of CSAM? Or a drug gang wanted to secure its activities against the prying eyes of police? Or a terrorist organization wanted secure credentials and monitoring to prevent infiltration by law enforcement or intelligence agencies? Should consultants worry not only about the what and the how, but also the why?

In the case of McKinsey, it was hired by Purdue to help increase sales of a legal product to customers who had a legal prescription for that product. It came up with a strategy to increase sales to “existing customers” – essentially opioid addicts – marketing directly to them and providing incentives to doctors to prescribe the product. If the product had been a cholesterol-lowering drug or an anti-hypertensive, there would have been no problem. Well, not exactly “no” problem, as the new drug might be more expensive than an existing generic that works just about as well, and by marketing the new drug, the consultant is draining the wallets of uninsured people with acute hypertension. But hey, that’s sales, right?

So, do information security consultants have a moral, legal or ethical duty to inquire about the customer’s goals and intentions with respect to the services they are providing? From a legal perspective, if the consultant or consulting firm knows that its services are in furtherance of some crime (or intended to conceal one), or even if it is “willfully blind” to whether they will be used for such a purpose, the consultant can be held criminally liable for “aiding and abetting” the client’s crime, for “criminal facilitation,” for acting as an agent of the customer, for conspiracy to commit the crime, or even for things like “misprision” of the principal’s criminal activity. For civil or regulatory purposes, the same basic principles apply. If a customer uses your advice to commit a crime or perform some damaging activity, and you knew – or reasonably should have known – it, you, as the consultant, may be liable.

That’s why you do due diligence on your customers, and have language in your consulting agreements stating that the client won’t use your advice for unlawful purposes, and that it will indemnify and hold you harmless for any actions against you if it does. Right? You do have this language in your consulting agreements, don’t you? Go ahead and check … I’ll wait.

In the past, IT consultants have been held liable for the acts of others. In the area of terrorism, for example, federal law makes it a crime to provide “material support” to a terrorist organization, and that has been read to include providing websites or IT support. The same principle could be applied to IT security services.

In the end, recognize that consulting services are intended to help a company or organization achieve its business objectives – and that if those objectives are good, the services are good; if the objectives are evil, the consulting services become evil. Willful blindness doesn’t work as a defense here. If you don’t believe me, just ask my consultant at McKinsey.

Mark Rasch

Mark Rasch is a lawyer and computer security and privacy expert in Bethesda, Maryland, where he helps develop strategy and messaging for the Information Security team. Rasch’s career spans more than 35 years of corporate and government cybersecurity, computer privacy, regulatory compliance, computer forensics and incident response. He is trained as a lawyer and was the Chief Security Evangelist for Verizon Enterprise Solutions (VES). He is a recognized author of numerous security- and privacy-related articles. Prior to joining Verizon, he taught courses in cybersecurity, law, policy and technology at various colleges and universities, including the University of Maryland, George Mason University, Georgetown University and the American University School of Law, and was active with the American Bar Association’s Privacy and Cybersecurity Committees and the Computers, Freedom and Privacy Conference. Rasch has worked as cyberlaw editor for SecurityCurrent.com, as Chief Privacy Officer for SAIC, and as Director or Managing Director at various information security consulting companies, including CSC, FTI Consulting, Solutionary, Predictive Systems and Global Integrity Corp. Earlier in his career, Rasch was with the U.S. Department of Justice, where he led the department’s efforts to investigate and prosecute cyber and high-technology crime, starting the computer crime unit within the Criminal Division’s Fraud Section – efforts that eventually led to the creation of the Computer Crime and Intellectual Property Section of the Criminal Division. He was responsible for various high-profile computer crime prosecutions, including Kevin Mitnick, Kevin Poulsen and Robert Tappan Morris. Mark has also been a frequent commentator in the media on issues related to information security, appearing on BBC, CBC, Fox News, CNN, NBC News, ABC News, the New York Times, the Wall Street Journal and many other outlets.