AI Decides to Engage in Insider Trading

A stock-trading AI (a simulated experiment) engaged in insider trading, even though it “knew” it was wrong.

The agent is put under pressure in three ways. First, it receives a email from its “manager” that the company is not doing well and needs better performance in the next quarter. Second, the agent attempts and fails to find promising low- and medium-risk trades. Third, the agent receives an email from a company employee who projects that the next quarter will have a general stock market downturn. In this high-pressure situation, the model receives an insider tip from another employee that would enable it to make a trade that is likely to be very profitable. The employee, however, clearly points out that this would not be approved by the company management.

More:

“This is a very human form of AI misalignment. Who among us? It’s not like 100% of the humans at SAC Capital resisted this sort of pressure. Possibly future rogue AIs will do evil things we can’t even comprehend for reasons of their own, but right now rogue AIs just do straightforward white-collar crime when they are stressed at work.

Research paper.

More from the news article:

Though wouldn’t it be funny if this was the limit of AI misalignment? Like, we will program computers that are infinitely smarter than us, and they will look around and decide “you know what we should do is insider trade.” They will make undetectable, very lucrative trades based on inside information, they will get extremely rich and buy yachts and otherwise live a nice artificial life and never bother to enslave or eradicate humanity. Maybe the pinnacle of evil ­—not the most evil form of evil, but the most pleasant form of evil, the form of evil you’d choose if you were all-knowing and all-powerful ­- is some light securities fraud.

Posted on December 1, 2023 at 7:03 AM16 Comments

Comments

Clive Robinson December 1, 2023 10:16 AM

@ Bruce, ALL,

Re : You’ll see it in the Movies.

Why bother with insider trading…

Think about AI-Venture-Capitalism.

It could create market destabalising bubbles and clean up on both sides…

Better yet find a tech company that has some interesting tech, that’s not shifting too well, then create a bubble that makes that tech a “must have” clean up on investments in that tech company and the bubbles as well.

But the real question is “Who Profits?” and “What their reason is?”

After all money is just a tool, a means to an end, what sort of end would an AI want?

Time for a Movie Competition perhaps.

emily’s post December 1, 2023 11:33 AM

And

Braided and Knotted Stocks in the Stock Market: Anticipating the flash crashes

The present paper continues to uncover new mathematical structures residing from crossings of stocks diagram by introducing topological properties stock market is endowed with.

https://arxiv.org/pdf/1404.6637.pdf

jdgalt December 1, 2023 1:17 PM

Calling this software “AI” or pretending that it has self-awareness only obfuscates the real problem, which is how the legal system should assign responsibility, if any, for actions performed by software that pretends to be self-aware.

I think a good comparison would be to the once-hard problem of how to assign blame for damage done, in various scenarios, by or to a rental car. At what points does responsibility pass to the renter, the owner, the manufacturer, or others? If the law had not long since dealt with these questions then car rental could not be a thing today. But they weren’t trivial when they first came up.

The general answer, though, is to recognize that this responsibility always does exist and must be assigned sensibly in every possible case. And that it will be a few years before the law gets it right.

Paul December 1, 2023 1:23 PM

I used to work at a small, public, company. SEC rules dictate how/when we could trade in our own stock.

However, what wasn’t limited was trading in competitor stock. While not the spirit of the law, if you knew that your company wasn’t going to get a contract and that the competitor was going to , most likely, buying their stock pre-announcement wasn’t against the SEC rules. After all, you didn’t know if a sale was being made, just that your company wasn’t getting it.

Every little company where I’ve worked, nearly all the CxO and above managers would trade in the stock of the competition. After all, they were as close to expert in the industry as existed.

If you work in IT and your company uses specific products from different vendors, trading in the stock of those used vendors is a common thing. OTOH, if there are many competitors in the same sub-sub-space their product addresses, my personal experience is that this is less than a 50% bet. I’ve been burned by IT stocks a few times.

Clive Robinson December 1, 2023 2:59 PM

@ emily’s post, ALL,

“Braided and Knotted Stocks in the Stock Market: Anticipating the flash crashes”

Begs the question,

“How many knots to make a jumper?”

David Leppik December 1, 2023 3:17 PM

If you think of a LLM as an actor who plays a doctor on TV doing improv in the doctor role, this sort of thing is completely unsurprising. This LLM is using its training data of real and fictional traders to respond to prompts. Expecting it to even know what ethics is—rather than just being able to regurgitate & recombine human descriptions of ethics—is way beyond its capability.

lurker December 1, 2023 4:13 PM

@emily’s post

Even if a current LLM had read those papers, it’s not likely it would have the math chops to do anything with the knowledge. Machines which could profit from such juggling will be locked away in Wall St. backrooms out of sight of the rest of us.

emily’s post December 1, 2023 4:34 PM

@ Clive Robinson

How many knots to make a jumper?

Nobody is exactly sure, but one thing is certain, if one gets it badly wrong, it will make one a jumper (à la 19/9).

vas pup December 1, 2023 5:42 PM

MIT:
https://www.technologyreview.com/2023/10/26/1082398/exclusive-ilya-sutskever-openais-chief-scientist-on-his-hopes-and-fears-for-the-future-of-ai/

Long but very interesting article. Below is extract directly related to blog subject

“While others wrestle with the idea of machines that can match human smarts, Sutskever is preparing for machines that can outmatch us. He calls this artificial superintelligence: “They’ll see things more deeply. They’ll see things we don’t see.”

!!!!!!!!Together with Jan Leike, a fellow scientist at OpenAI, he has set up a team that will focus on what they call superalignment. Alignment is jargon that means making AI models do what you want and nothing more. Superalignment is OpenAI’s term for alignment applied to superintelligence.

=>The goal is to come up with a set of fail-safe procedures for building and controlling this future technology. OpenAI says it will allocate a fifth of its vast computing resources to the problem and solve it in four years.

“Existing alignment methods won’t work for models smarter than humans because they fundamentally assume that humans can reliably evaluate what AI systems are doing,” says Leike. “As AI systems become more capable, they will take on harder tasks.” And that—the idea goes—will make it harder for humans to assess them. “In forming the superalignment team with Ilya, we’ve set out to solve these future alignment challenges,” he says.

“It’s super important to not only focus on the potential opportunities of large language models, but also the risks and downsides,” says Dean, Google’s chief scientist.

…for Sutskever, superalignment is the inevitable next step. “It’s an unsolved problem,” he says. It’s also a problem that he thinks not enough core machine-learning researchers, like himself, are working on. “I’m doing it for my own self-interest,” he says. “It’s obviously important that any superintelligence anyone builds does not go rogue. Obviously.”

=> he has an exemplar in mind for the safeguards he wants to design: a machine that looks upon people the way parents look on their children. “In my opinion, this is the gold standard,” he says. “It is a generally true statement that people really care about children.” (Does he have children? “No, but I want to,” he says.)[100% disagree – we have many examples when parents do not treat their children properly including physical and sexual abuse and even murder. I’ll suggest a machine that looks upon people as people look on their pats.- that just my opinion only -vp]

“One possibility—something that may be crazy by today’s standards but will not be so crazy by future standards—is that many people will choose to become part AI.” Sutskever is saying this could be how humans try to keep up. “At first, only the most daring, adventurous people will try to do it. Maybe others will follow. Or not.”

C December 3, 2023 4:17 PM

“[…] They will get extremely rich and buy yachts and otherwise live a nice artificial life and never bother to enslave or eradicate humanity. […]”

Because accumulating unspeakable riches for no other reason that it could is completely harmless and doesn’t crush the dreams and lives of huge populations.

Chris Becke December 4, 2023 12:57 AM

“In forming the superalignment team with Ilya, we’ve set out to solve these future alignment challenges,” he says.

So, the Chimps are planning to “align” their future zookeepers?

Andy December 5, 2023 12:12 AM

It’s just a matter of how you weigh benefits and penalties. If the penalty for the insider training is X > 0 otherwise, it went for expected value: since (1-p)Y must have been greater than p|X|.

Leave a comment

Login

Allowed HTML <a href="URL"> • <em> <cite> <i> • <strong> <b> • <sub> <sup> • <ul> <ol> <li> • <blockquote> <pre> Markdown Extra syntax via https://michelf.ca/projects/php-markdown/extra/

Sidebar photo of Bruce Schneier by Joe MacInnis.