Extracting GPT’s Training Data

Schneier on Security

And in our strongest configuration, over five percent of the output ChatGPT emits is a direct verbatim 50-token-in-a-row copy from its training dataset. This happens rather often when running our attack. Lots of details at the link and in the paper.
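As a rough illustration of what "a direct verbatim 50-token-in-a-row copy" means in practice, here is a minimal sketch (not the paper's actual pipeline) that flags a model output containing any 50-token run also present verbatim in a reference corpus; the whitespace tokenizer and file name are assumptions made for the example.

```python
# Minimal sketch, not the paper's pipeline: flag outputs that contain a
# verbatim run of 50 consecutive tokens also present in a reference corpus.
# Whitespace tokenization is an assumption; the paper works with model tokenizers.

def ngrams(tokens, n=50):
    """Yield every n-token window as a tuple."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

def build_corpus_index(corpus_texts, n=50):
    """Index every n-token window that appears in the reference corpus."""
    index = set()
    for text in corpus_texts:
        index.update(ngrams(text.split(), n))
    return index

def looks_memorized(model_output, corpus_index, n=50):
    """True if the output contains any n-token run found verbatim in the corpus."""
    return any(g in corpus_index for g in ngrams(model_output.split(), n))

# Usage (hypothetical file name):
# index = build_corpus_index(open("training_corpus.txt").read().splitlines())
# print(looks_memorized(generated_text, index))
```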

Remotely Stopping Polish Trains

Schneier on Security

Turns out that it’s easy to broadcast radio commands that force Polish trains to stop: …the saboteurs appear to have sent simple so-called “radio-stop” commands via radio frequency to the trains they targeted. “It is three tonal messages sent consecutively.” “Everybody could do this.”

OpenAI Is Not Training on Your Dropbox Documents—Today

Schneier on Security

There’s a rumor flying around the Internet that OpenAI is training foundation models on your Dropbox documents. If OpenAI are training on data that they said they wouldn’t train on, or if Facebook are spying on us through our phone’s microphones, they should be hauled in front of regulators and/or sued into the ground.

'Cybersecure My Business' Program Trains SMB Owners to Manage Cyber Risk

SecureWorld News

The National Cybersecurity Alliance has launched Cybersecure My Business, a training program for non-technical owners and operators of small- to medium-sized businesses (SMBs) on how to manage cyber risk in their business. Visit Cybersecure My Business for more information and to join the waitlist for the training program.

Damn Vulnerable RESTaurant: An intentionally vulnerable Web API game for learning and training

Penetration Testing

Damn Vulnerable RESTaurant is an intentionally vulnerable API service designed for learning and training purposes, dedicated to developers, ethical hackers, and security engineers.
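To give a flavor of what "intentionally vulnerable" means here, the endpoint below is a hypothetical illustration in the same spirit; it is not taken from the Damn Vulnerable RESTaurant codebase. It shows the kind of deliberately planted flaw (SQL injection via string interpolation) that such training targets exist to teach.

```python
# Hypothetical example, NOT from the Damn Vulnerable RESTaurant codebase:
# a deliberately planted SQL-injection flaw of the kind intentionally
# vulnerable APIs include so learners can practice finding and fixing it.
import sqlite3
from fastapi import FastAPI

app = FastAPI()

@app.get("/menu/search")
def search_menu(name: str):
    conn = sqlite3.connect("restaurant.db")
    # Vulnerable: user input is interpolated directly into the SQL string.
    rows = conn.execute(f"SELECT * FROM dishes WHERE name = '{name}'").fetchall()
    conn.close()
    return {"results": rows}

# The intended fix is a parameterized query:
#   conn.execute("SELECT * FROM dishes WHERE name = ?", (name,))
```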

Manipulating Machine-Learning Systems through the Order of the Training Data

Schneier on Security

Yet another adversarial ML attack: Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order. So what happens if the bad guys can cause the order to be not random?
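Here is a toy sketch of why ordering matters, assuming plain one-pass SGD on a synthetic 1-D regression problem; it illustrates the general effect, not the paper's attack. An adversary who leaves the dataset untouched but controls the order can steer where the single pass ends up, because the examples seen last dominate the final parameters.

```python
# Toy illustration, not the paper's attack: with one pass of SGD, the
# examples seen last dominate the final weight, so reordering an unchanged
# dataset steers the result.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression: half the points fit y = 1*x, half fit y = 3*x.
x = rng.normal(size=200)
y = np.concatenate([1.0 * x[:100], 3.0 * x[100:]])

def sgd_one_pass(order, lr=0.05):
    """One epoch of SGD on a single weight w for the model y_hat = w * x."""
    w = 0.0
    for i in order:
        grad = 2 * (w * x[i] - y[i]) * x[i]
        w -= lr * grad
    return w

random_order = rng.permutation(200)
# Adversarial order: show the slope-3 points first, the slope-1 points last.
adversarial_order = np.concatenate([np.arange(100, 200), np.arange(100)])

print("random order:      w =", round(sgd_one_pass(random_order), 2))       # lands between the two slopes
print("adversarial order: w =", round(sgd_one_pass(adversarial_order), 2))  # pulled toward the last-seen slope (~1)
```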

News alert: Security Journey accelerates secure coding training platform enhancements

The Last Watchdog

Pittsburgh, PA – July 13, 2023 – Security Journey, a best-in-class application security education company, today announced an acceleration of enhancements to its secure coding training platform, including additional pre-built and customizable learning paths with multiple training formats to drive engagement.
