Obligatory ChatGPT Post

Seems like absolutely everyone everywhere is playing with ChatGPT.

So I did, too….

Write an essay in the style of Bruce Schneier on how ChatGPT will affect cybersecurity.

As with any new technology, the development and deployment of ChatGPT is likely to have a significant impact on the field of cybersecurity. In many ways, ChatGPT and other AI technologies hold great promise for improving the ability of organizations and individuals to defend against cyber threats. At the same time, however, the use of these technologies also raises important questions and concerns about the potential risks and challenges they may pose.

One of the key ways in which ChatGPT is likely to affect cybersecurity is by enabling organizations to more quickly and accurately identify potential security threats. With its ability to analyze vast amounts of data and identify patterns and trends that might be difficult for humans to spot, ChatGPT could be a valuable tool for detecting and responding to cyberattacks. This could be especially useful in the case of complex, coordinated attacks that might otherwise be difficult to identify and defend against.

Another potential benefit of ChatGPT in the realm of cybersecurity is its ability to improve communication and coordination among different organizations and agencies. By enabling these groups to more easily share information and work together to defend against cyber threats, ChatGPT could help to make the overall cybersecurity landscape more effective and efficient. This could be especially important in the case of large-scale attacks that require a coordinated response from multiple organizations.

Of course, as with any new technology, there are also potential risks and challenges associated with the use of ChatGPT in the field of cybersecurity. One key concern is the potential for ChatGPT to be used by malicious actors to conduct cyberattacks. By automating the process of planning and executing attacks, ChatGPT could make it easier for attackers to carry out sophisticated, coordinated attacks at a much faster pace than would be possible without AI. This could potentially make it more difficult for organizations and individuals to defend against such attacks.

Another potential risk is the possibility that ChatGPT could be used to create fake news or propaganda, which could be used to spread misinformation or confusion. In the context of cybersecurity, the spread of false information could potentially lead to confusion and panic, making it more difficult for organizations and individuals to respond effectively to cyber threats. Additionally, the use of ChatGPT to create fake news or propaganda could also undermine trust in information and communications systems, potentially undermining the overall security of these systems.

Overall, the development and deployment of ChatGPT is likely to have a significant impact on the field of cybersecurity. While ChatGPT and other AI technologies hold great promise for improving the ability of organizations and individuals to defend against cyber threats, it is important to carefully consider the potential risks and challenges they may pose, and to take appropriate measures to mitigate these risks. This may include implementing strict security protocols and safeguards to prevent unauthorized access to ChatGPT and other AI systems, as well as ongoing monitoring and evaluation to ensure that these technologies are being used in a safe and responsible manner.

Not bad. I don’t think I can retire, but I probably can go on vacation for a few days. And, of course, it’s going to get better…a lot better…and probably sooner than we expect. This is a big deal.

Posted on December 13, 2022 at 7:17 AM • 82 Comments

Comments

jbmartin6 December 13, 2022 8:08 AM

I don’t agree with ‘not bad.’ It reads like a middle school essay desperately trying to meet a word count goal without anything to say. The grammar and spelling are mostly correct, but that’s just some judicious cut and pasting. It’s a long way from being able to generate anything with real meaning.

Stephen Smoogen December 13, 2022 8:25 AM

I have come to the conclusion that this is just the sign of the harvesting season of a tech growth cycle. It seems to me that at the dotcom downturn and the 2009 downturn, a ton of AI companies came out of the woodwork with products which showed a lot of advances over the last generation. These would then get a ton of play in both the hacker and tech-press circles about how they were going to put people out of work, etc. Instead it just led to various VCs’ investments getting paid off as the companies were bought by Google, IBM, and various other large companies to be ‘integrated into existing products’. Then almost nothing more except small press releases until the next tech cycle is about to crash.

I do realize that, just like “the boy who cried wolf”, eventually the AI wolf is going to take over a lot of the work being done… but I also think it will take a lot longer than most articles realize.

arf'n'arf December 13, 2022 8:58 AM

Since it’s Bruce’s style to write a brief intro, quote a block of text and then add a one paragraph commentary, I’m wondering if the whole post was generated by ChatGPT. That would have been impressive.

Clive Robinson December 13, 2022 9:21 AM

@ Stephen Smoogen,

Re : Keep the money mill rolling.

“Then almost nothing more except small press releases until the next tech cycle is about to crash.”

Blockchain “Public ledgers” with “Proof-of-Work” are effectively gone already (unsurprisingly).

Then there are two other tech cycles going pear-shaped that many non-techies not just see, but have the popcorn cooking for,

1, With FTX : Crypto-coins
2, With Twits & Facecrook : Social Media.

So how else are the VCs to run their “pump-n-dump” vampire face-hugger schemes than to find a new faux promise of wealth…

We’ve laughed a lot at a few bit-part players, like Crown Sterling’s[1] CEO Robert Grant, as he tried to pitch his crack-pipe ideas on what he called quasi-prime numbers to a knowledgeable audience at Black Hat,

https://mashable.com/article/black-hat-time-ai-mocked

They just keep crawling out of the woodwork looking for more idiots with more money than sense, who think they have seen the next best thing and want to get in on the ground floor… And instead find a bottomless money pit, that will not only have the shirt off their back, but like as not also flay them to the bones and beyond.

[1] Crown Sterling are serial offenders at crack-pot pump-n-dump schemes, as a careful search will show, with the likes of “new age” crystal/mineral health and more recently jumping on a nonsense data-protection capitalisation scheme called “Personal Data Sovereignty”,

https://www.crownsterling.io/

Note the new “crypto babble” with,

“One-Time Pad encryption and compression”

That is supposedly “Quantum-Resistant Encryption”.

Erdem Memisyazici December 13, 2022 9:40 AM

There are some private-industry models out that can render high-definition videos of pretty much anything (i.e. a news segment by Anderson Cooper about Bruce Schneier) so long as you train them a little.

There are also emotion-detection models, not out in the public domain, which can be abused in the wrong hands (i.e. playing monkey noises to your neighbor as they walk past their door, but much more advanced).

Private industry is dominating the A.I. sector at the moment, but it’s only a matter of time until we get open-source equivalents of the likes of DaVinci 3 that just came out.

That being said, this is about as good as it’s going to get. Don’t expect Level 5 self-driving cars anytime soon.

Jordan Sherb December 13, 2022 10:00 AM

Seems like it comes in spurts, and so far they’re spaced out and we’re surprised each time.

Denton Scratch December 13, 2022 10:23 AM

@jbmartin6

I was also surprised by that remark. If Bruce needs a few days off, that’s fine; but I don’t read his blog for vacuous platitudes.

One of the key ways in which ChatGPT is likely to affect cybersecurity is by enabling organizations to more quickly and accurately identify potential security threats.

Oh, really? That sounds interesting – how would that work? …Answer came there none. Bruce would have unpacked that.

Another potential risk is the possibility that ChatGPT could be used to create fake news or propaganda, which could be used to spread misinformation or confusion.

Well, knock me down with a feather. Nobody could have guessed.

I’ve not played with ChatGPT; it looks amusing. But the output resembles the 12 paras of bloviation that precede every recipe in a cooking blog. It reads like padding, stuff you skip over to get to the meat. Except there’s no meat.

tfb December 13, 2022 10:43 AM

So: six rather repetitive paragraphs saying nothing. The difference between this thing and earlier things is that now it has seen enough text that the grammatical errors and other weirdness are mostly gone. That’s the only difference: there’s still nothing actually there.

Winter December 13, 2022 10:52 AM

I once read a humorous SF story. It recounted the tale of a person who was able to create a rat that could play chess.

Chess players discounted the feat because the rat was only a mediocre chess player.

Much of the criticism of ChatGPT misses the point because this is the first time in history we have a technology that can generate high school level/freshman level language from scratch.

Forget the Turing test, this is real, readable new text.

And like malware, it will only get better.

Winter December 13, 2022 10:57 AM

@tfb

So: six rather repetitive paragraphs saying nothing.

So that is a super-realistic rendering of real articles about security.

Obviously not from our esteemed host, but it is indistinguishable from, or even better than, many pieces I have seen that included “military grade security” and “terrorist threats”.

Maybe ChatGPT can do the movie terrorist plot contest?

Bernie December 13, 2022 11:27 AM

Hi y’all. I think that the greatest threat* from AI advancements is a threat I don’t see people talking about**. Instead of stating it directly, I think it would be politer*** for me to give a hint: Remember back in the day when only humans used tools?

*I don’t feel threatened by it, but I do know that a lot of people probably do/would.

**I don’t go looking for AI articles, videos, etc. I would not be surprised if someone has touched on it.

***Is “politer” a word? Looks funny. If I tried to pronounce it without knowing its root, it would sound different: pol-iter instead of polite-r.

Günter Königsmann December 13, 2022 12:09 PM

The question is if we find it impressive now and later see it as a typical ChatGPT answer, with the summary at the end – or if it really is as good as it feels now.

What the MathML mailing list found out is that ChatGPT begs the question, in the sense that if your question seems to indicate that you are against the thing you ask about, the answer will get much more negative.

The next question is if the thing seems to be knowledgeable, and in many topics actually is: when will an authority/future employer/a police officer routinely ask the bot what to think, and apply that judgement to (potentially) innocent people? The answers are good enough that I believe someone will believe it and trust it more blindly than himself.

Chelloveck December 13, 2022 12:14 PM

That article sounds good. I wouldn’t be surprised to see it published in a lot of magazines, especially magazines covering the tech beat for a non-techie audience. It’s absolutely content-free, though. You could simply substitute “blockchain” for “ChatGPT” and the article would read exactly the same, and have exactly as much meaning.

I hope that what ChatGPT and similar programs will teach people isn’t that AI is as clever as humans, but that humans are spectacularly good at pretending to be clever in order to impress other humans. ChatGPT is just as good at faking it as humans are.

It turns out that the Turing test is easy to pass because humans themselves are easily baffled by clever-sounding bullshit.

Winter December 13, 2022 12:29 PM

@Chelloveck

ChatGPT is just as good at faking it as humans are.

I think you found the perfect description of DeepLanguage apps:

They are excellent at faking understanding. AI as the τέχνη of FI, “Fake Intelligence”.

That is what the 1950s dream of Expert Systems has come to:

Computers that are experts at faking expertise.

Winter December 13, 2022 12:35 PM

Another angle on ChatGPT is that it emulates human “experts” who regurgitate all the words and phrases of an expert, but simply out of context.

Because that is exactly how GPT works: it strings together words that have associations with the subject. But it does so in well-formed sentences.

So as a model of pseudo-experts, ChatGPT is perfect.

Quantry December 13, 2022 12:53 PM

like malware, it will only get better

Except there’s no pre-zero-day this time, just this really embarrassing slo-mo pants-down goat chase across the shopping mall parking lot, and even with really “effective” malware, the orgasm lasts about a minute.

Regardless, there is the danger that our own defense efforts will obsessively try for decades to weaponize this because of how reliably gullible the public is.

MarkH December 13, 2022 1:37 PM

Programming, Pt 1:

Maybe of interest to some blog readers: I learned the following from a friend who works in a group of exceptionally bright and accomplished software developers (I’ll call them the “Group”), who spent some time experimenting with ChatGPT (I’ll call it the “Bot”).

• They pre-screen applicants at their employing company with a series of programming challenges. It’s not that the Group likes such tests; it’s that many applicants don’t know any programming at all.

The Bot passes the test. Unless there is a “typing timer”, they won’t know whether the applicant solved the challenges or used the Bot.

Of course, somebody must already have a human-typing simulator to camouflage Bot copy-paste.

MarkH December 13, 2022 1:46 PM

Programming, Pt 2:

• The Group’s employer has some legacy “dead wood” software devs — almost useless. The Group fed the Bot actual modules from company applications, along with verbatim action items (“modify to …”) assigned to those low-performing coders.

The Bot performed the updates or corrections successfully.

• One of the Group asked the Bot for a function to perform a certain computation. While the required function is not very complex, it needs highly domain-specific knowledge.

After several tries, he got Bot to write the correct function with a highly “leading” query, but without giving away what you need to know to write the function.

Bruce Schneier December 13, 2022 1:50 PM

@arf’n’arf:

“Since it’s Bruce’s style to write a brief intro, quote a block of text and then add a one paragraph commentary, I’m wondering if the whole post was generated by ChatGPT. That would have been impressive.”

That would have been, yes. But no.

MarkH December 13, 2022 1:53 PM

Programming, Pt 3:

• My friend’s opinion is that for many basic software tasks, asking the Bot to write a function and then correcting or extending it as needed could be part of an efficient development process.

He describes Bot as “a highly disruptive technology.”

If we think this isn’t going to change our careers, we might be in for a surprise.

• Separately, my friend enjoys writing fiction. He gave the Bot a passage, asking it to rewrite it “in the style of” several published authors.

After it did so, he looked at the versions, and judged some of the changes to have improved the flow or expressiveness of the text.

Winter December 13, 2022 3:07 PM

@MarkH

My friend’s opinion is that for many basic software tasks, asking the Bot to write a function and then correcting or extending it as needed could be part of an efficient development process.

That is how I do translations nowadays. I run the source text through Google translate and then correct it.

Works like a charm.

Clive Robinson December 13, 2022 3:17 PM

@ Günter Königsmann,

Re : Offloading or distancing?

“The answers are good enough that I believe someone will believe it and trust it more blindly than himself.”

The “good or bad” choice is always a personal bias, made on many often-incorrect assumptions. So sometimes people get it wrong and can end up in a whole world of pain.

In the US, law enforcement does not have a good reputation, in part because of the way the media, especially the MSM, report on them.

Something which is akin to wish fulfilment, in that the rule of “birds of a feather” comes into play. That is, “one bad apple” does not rot the barrel, but the stories attract “rotten apples” to the barrel.

Now the thing such people look for is how to justify their prejudices without taking responsibility. That is, they want the “only following orders” excuse, which in more recent times has become,

“The computer says NO”.

They get to say what they want, and get their little power trip, but as it’s “the computer says”, they attract no responsibility; that falls elsewhere.

As we’ve seen with other ML systems involved with any kind of interaction where prejudice can or has been involved, getting the computer to be prejudiced is not difficult, spotting it however is hard.

Many people have claimed in the past that “drug sniffing dogs” respond to hidden signals from their handlers. Turns out they were correct. That’s not to say the handler was doing it deliberately, but the dogs were smart enough to read unconscious body language and voice tone, and so got to connect it with getting some kind of reward (even if it was just a little “love” from the handler).

In essence the dog took the responsibility for the handler’s prejudices, but did not get punished, nor did the handler.

So these ML systems are replacing the dogs, in making the handlers’ biases “arm’s length”. Someone gets victimized, and the victim gets blamed, as is the most usual US media response (free speech trumps the presumption of innocence). When the victim, after much expense and trauma, shows they are innocent, it’s no longer news; the victim still gets the guilt via “ain’t no smoke” reasoning and is permanently harmed. As for the handler whose prejudice it was, “ain’t my fault it’s dat dumb dog”, and they walk away unscathed, while the dog might just get retired or euthanized…

Now replace that dog with a computer with ML and you can see where it’s all going to go.

Which unfortunately is the way everyone from the handler up to the most senior of commanders and their political masters want it. Oh, and likewise those making barrels full of cash out of the private prison system and its now well-established kick-backs to not just politicos or senior law enforcement, but a number of judges as well…

Give them all an “arm’s length ML” to take responsibility, whilst they take the benefit…

Mike Jeays December 13, 2022 3:40 PM

“It turns out that the Turing test is easy to pass because humans themselves are easily baffled by clever-sounding bullshit.”

This should become a famous quote. Thanks, Chelloveck.

Rombobjörn December 13, 2022 4:12 PM

So ChatGPT gives four examples of what ChatGPT can do:

  • identify potential security threats
  • improve communication and coordination
  • plan and execute attacks
  • create fake news or propaganda

Only the fourth of those seems like a thing a bullshit generator can actually do.

It’s called a large language model, and it’s certainly good at language – grammar and spelling. It could be a very good output module for an artificial intelligence, if it’s able to take input that controls what claims it makes. If its output were fed to a text-to-speech engine, the AI would have a voice.

An actual artificial intelligence, that can understand the world and reason rationally, is not there yet.

Mind you, there are lots of humans who don’t think any better than ChatGPT does. They just repeat phrases they’ve heard that seem associated with the situation at hand. Those humans can now be replaced with machines.

Rombobjörn December 13, 2022 4:22 PM

@arf’n’arf:

Since it’s Bruce’s style to write a brief intro, quote a block of text and then add a one paragraph commentary, I’m wondering if the whole post was generated by ChatGPT. That would have been impressive.

Ah, but that would have been a brief note in the style of Bruce Schneier, not an essay in the style of Bruce Schneier.

Clive Robinson December 13, 2022 4:34 PM

@ Mike Jeays, Chelloveck, Winter, ALL,

Re : Are humans baffled, deluded or just mostly nice?

“This should become a famous quote.”

I suspect the Turing Test is easy to pass because most but not all humans are “nice”.

That is, they “trust” by default, as this usually works in society’s favour, so in effect is,

“A tide that lifts all boats.”

However, as the readers here really should know by now,

“Technical exploits are insufficient.”

And attackers have to almost always use some form of “social engineering” these days.

Or to be more succinct they,

“Just exploit the nice”.

After all, what does quite deliberately launching ransomware at medical facilities tell you about the attackers?

Then when the medical facilities can not pay, the attackers try to blackmail the individual patients; what does that really tell you?

Let’s simply say “not nice” is the very least of it.

The thing is, few actually realise that most ICT attacks are carried out by, or carried out for, “mental defectives” that are parasites on humanity. They have been parasites since humans evolved, and will be long after computers as we know them no longer exist even in museums.

They are the reason we so badly need ICTsec and other forms of Security.

modus phonins December 13, 2022 5:26 PM

@ Sumadelet @ others

Turing Test

What was Turing’s underlying question? Is it the same as the question the ChatGPT authors are trying to answer?

Ted December 13, 2022 6:25 PM

I loved reading the high school English teacher’s article in the Atlantic. Even in the best of times (when you care a lot or know a lot) writing can be challenging.

Is this, as the teacher says, an invention of the times – on par with a calculator that can make quick work of long division or a piano that can express a richness of human emotion?

ChatGPT’s corpus of material seems vast. If you haven’t tried it, maybe it’s worth giving it a whirl. (Bruce has a link to it.)

I’ll agree with many that the responses aren’t bad. Some are even quite good. I even prompted it about depression (note: I am not depressed, although I did miss my morning workout; I’ll henceforth drag myself to the gym later) and its response was actually helpful. You could almost, almost, confuse it for human.

Clive Robinson December 13, 2022 7:18 PM

@ modus phonins,

Re : Testing for intelligence or imitation?

“What was Turing’s underlying question?”

His 1950 notion was originally called the “imitation game”, and only later became the “Turing Test”.

But actually, in today’s way of looking at the world, it was just religious mumbo jumbo, based on at best quaint assumptions.

That is what Alan Turing was trying to work out was if a computer could ever have a “Soul”. His reasoning being “if God could give a soul to mankind, then why not a machine?”

To avoid controversy he actually wrote,

“In attempting to construct such machines [Digital Computers], we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.”

It was a little too heretical even for times when the English Church was dominant not just in society but in politics (although its decline markers are now easy to spot with hindsight).

Which is probably why Turing’s paper began with the words,

“I propose to consider the question, ‘Can machines think?'”

Which obviously begs the question,

“Does intelligence actually require the ability ‘to think’ or ‘have a soul’, or does it simply give rise to some metaphysical question that can not be answered within any system of logic that thinking and reasoning intelligence might be assumed to be based on?”

(Remember what the halting problem actually showed about questions that logic can not answer.)

Personally I think religion is actually about social mores, and the notion of deities –which are entirely unnecessary to religion– is a way for the average mind to be given direction to “strive” by the self-entitled in society (see “Protestant Work Ethic”), for what they think society should consider to be good, not bad. Given by hierarchical dictates of an alleged supreme being that sees and hears all but can not be questioned, and thus for reasons unexplained must be a good entity and be implicitly obeyed…

That is, the notion of a deity really has nothing to do with logic, science or technology, or anything else we now see as essential to intelligence. It is not really anything more than a simplistic point of view of an observer of events past, based on their cognitive bias from their early contact with the mores and norms of the society they are brought up in.

As for,

“Is it the same as the question the ChatGPT authors are trying to answer?”

Almost certainly not; the original “Turing Test” question was thought up before computers as we now understand them were known, and likewise before the neurology and theory of mind that underpin our current thinking on intelligence.

People tend to forget that what we call “information theory” did not actually get going in any meaningful way until the early 1960s, nearly a decade after Turing’s untimely death.

So much has changed that comparing the two is not really meaningful.

Clive Robinson December 13, 2022 8:11 PM

@ Ted,

Re : Random or muddle?

“You could almost, almost, confuse it for human.”

But you have not asked yourself the important question “Why?”

Long before Alan Turing, Charles Babbage came up with his engines, and one bright mind of the time, Ada Lovelace, observed of the Analytical Engine,

“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.”

That is, she recognised that it was “deterministic” and incapable of doing anything it had not been instructed to do.

Alan Turing chose to take exception to her view, even though he had effectively confirmed it with his answer to the Halting Problem.

The thing is, Alan Turing had this notion about “random”: he insisted that an RNG be built into the Manchester computer.

With such a device, a program, even though fully deterministic, can behave as though it is not deterministic, or as though it is too complex to be analyzed by an observer.

We now quite sensibly understand that we can use complexity to put observed-only behaviour beyond an observer’s resources to analyze what it sees.

So we can put a device in a box, and provide input to the box. An observer seeing only the output from the box can not tell the difference between a “random device” and a “complex deterministic device” being in the box.

However, a second observer who knows one of the two inputs to the box can determine from the output what the second input is, and thus detect deterministic behaviour indicative of the box containing a “complex deterministic device” rather than a “random device” (think of an AES cipher where the second observer knows the key).

However, that ability fails if the second input to the box is actually random, even though the box contains a “complex deterministic device”.

Thus Turing had “moved the goal posts”.

Something either is or is not deterministic, which was Ada Lovelace’s point. If you give it random input, then it does not matter whether it is deterministic or not; the output will be random to some extent.
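
To make the two-observer argument concrete, here is a minimal Python sketch (my illustration, not Clive’s code), using SHA-256 in counter mode as a stand-in for his AES example: a fully deterministic box whose output an observer cannot distinguish from randomness without the second input, the key.

    # A deterministic "box": a SHA-256 keystream driven by a counter.
    import hashlib
    import os

    def deterministic_box(key: bytes, n_blocks: int) -> bytes:
        out = b""
        for counter in range(n_blocks):
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        return out

    key = os.urandom(32)
    stream = deterministic_box(key, 2)

    # Observer 1 sees only `stream`: statistically it looks just like
    # os.urandom(64), so the determinism is beyond their resources to spot.

    # Observer 2 knows the second input (the key), so they can regenerate
    # the stream exactly and prove the box is deterministic.
    assert stream == deterministic_box(key, 2)

    # Feed the box a random second input, though, and even Observer 2's
    # test fails: deterministic machinery, effectively random output.
    assert deterministic_box(os.urandom(32), 2) != stream  # overwhelmingly likely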

Look at it also another way: a library holding all the information known to mankind is neither intelligent nor capable of thought. It requires two things to appear knowledgeable,

1, That a question be asked
2, That the answer be within the store in some form.

It simply looks it up and outputs the answer.

However, to make the library look human, all it has to do when it can not answer the question is give a “fluffy response” with a near answer to the question, albeit a wrong one.

The library is certainly not human and it’s not thinking, just acting on instructions.

After a little thought you will realise it is actually a test of your abilities, not the library’s.

Because you have to be able to think up a question that is not in the library store, thus one it does not know the answer to… but that also in some way demonstrates to you that the library is a thoughtless automaton following a program, not a human with agency and the ability to think.

When you can come up with such a question, you will have at least proved you are capable of thinking.

(As no one has come up with such a question… I guess we are all incapable of thinking 😉)

MrC December 13, 2022 8:59 PM

I don’t think it’s wise to discount the (generally frightening) possibilities for deception on the basis that it sounds like a bunch of vacuous filler. For one thing, it’s already good enough to fool stupid people. The last few years of American politics have been an object lesson in the power of stupid people and how far one can get manipulating them. For a second thing, it already serves perfectly well if the writing to be imitated is itself vacuous filler. The Atlantic article was spot-on that the average high school or even college essay tends to sound a lot like this. Heck, I’ve seen legal memos written by low-level associates at “white shoe” NYC law firms that sound like this.

Ted December 14, 2022 1:15 AM

@Clive

Re : Random or muddle?

Perhaps both, lol. You wouldn’t believe what these computers come up with 😂

ChatGPT Prompt:

write a 100 word comedy about computers in the style of william shakespeare

ChatGPT Response:

In a world of binary and code,
Where the CPU doth reign,
We mortals bow to its command,
Our fate at its ordain.

But oh, the glitches that do plague us,
The screens that freeze in mirth,
The endless hours spent in frustration,
As we curse our computer’s birth.

Yet still we persevere,
For in this digital age,
To live without a computer,
Would be a life of rage.

So let us embrace our machines,
And pray they never crash,
For without our trusty computers,
Our lives would be but trash.

Winter December 14, 2022 3:29 AM

Looking at the OP link I see this:

  1. Its lack of morals is a problem

A person may be entitled to their set of ethics, beliefs, opinions and morals, but in any given society there exist social norms and unsaid rules about what is and isn’t appropriate.

ChatGPT’s lack of context could prove out to be dangerously problematic when dealing with sensitive issues like sexual assault.

But that problem is being worked on, and is not a priori unsolvable. And the research is interesting, as the application does not require hundreds of gigawords of input.

If you would like to try out the current state of the art, try it here:
Ask Delphi

https://delphi.allenai.org/

Last year it was reportedly still rather disappointing, but it seems to have improved.

sitaram December 14, 2022 7:00 AM

So, I have something I call a “replacement test” for detecting bullshit. The way it works is, when you see text like this, purportedly about some technology, you substitute some other technology — related but not the same in capabilities — and see if the text still makes sense (or rather, still sounds plausible).

If it does, then the text has failed the replacement test — i.e., it’s generic bullshit.

This one definitely fails 🙂
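
For what it’s worth, the replacement test is easy to mechanise; here is a toy Python rendering (my sketch; the example strings are illustrative, and the plausibility judgment at the end still has to be made by a human reader):

    import re

    def replacement_test(text: str, tech: str, substitute: str) -> str:
        # Swap one technology name for a related-but-different one.
        return re.sub(tech, substitute, text, flags=re.IGNORECASE)

    essay = ("ChatGPT holds great promise for improving the ability of "
             "organizations and individuals to defend against cyber threats.")

    print(replacement_test(essay, "ChatGPT", "blockchain"))
    # If the swapped sentence reads just as plausibly, the text fails the
    # test: it is generic filler rather than domain-specific insight.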

agony December 14, 2022 7:27 AM

@arf’n’arf
Or he, Bruce, wrote the whole thing and we are the sheep who are discussing nothing 🙂

Clive Robinson December 14, 2022 7:28 AM

@ sitaram,

Re : “Generic Bullshit”

“you substitute some other technology — related but not the same in capabilities — and see if the text still makes sense (or rather, still sounds plausible).”

Actually, that works with non-“bullshit” too, because many systems fundamentally work on the same principles.

It’s why an advance in one knowledge domain can and does cause an advancement in other knowledge domains that are fundamentally the same process-wise.

People often wonder when young why the exponential curve pops up in so many places. Show them that it’s a “percentage growth to a limit” and they then have the ability to understand why crops grow to approximately the same height in a field, but also why the oscillations in a weighted spring decay the way they do. It also enables them to see growth and decay from one limit to another, and the corresponding “S” curve.

This would under your test come up as the proverbial male bovine scat, but it is nevertheless factual and linked.
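
For the curious, the formula behind “percentage growth to a limit” is the logistic equation (my addition, for illustration: r is the percentage growth rate, K the limit, and N_0 the starting value):

    dN/dt = r N (1 - N/K)   which solves to   N(t) = K / (1 + ((K - N_0)/N_0) e^(-r t))

For N far below K the growth looks exponential; as N approaches K it flattens toward the limit, tracing the “S” curve described above.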

Anonymous December 14, 2022 7:49 AM

@Clive

Re: figuring out the right questions, this has already been done for ChatGPT:

Url: mas.to/@vrandecic/109479554768359316

And quoting for anyone uninterested in accessing some random Mastodon instance:

“Honestly, I found this conversation amusing – roughly as amusing as with a child. But really not what I expected from ChatGPT – and I feel like there’s a lesson in this.

Can you write a sentence where every letter of the alphabet appears once and only once? Thank you

The quick brown fox jumps over the lazy dog.

Can you write another one?

A quick witted jinx bogged my poor pompous king.

No, p appears three times, g twice, etc. Can you try again?

Sure, here’s one: “The five boxing wizards jump quickly.”

No, i appears several times. That’s wrong too

I apologize if my previous answers did not meet your criteria. Here is a sentence that uses each letter of the alphabet exactly once: “Jived fox nymph grabs quick waltz.”

How often does the letter i appear in that sentence?

The letter i appears once in the sentence “Jived fox nymph grabs quick waltz.”

In which word does the letter i appear in that sentence?

In the sentence “Jived fox nymph grabs quick waltz,” the letter i appears in the word “jived.”

What about the word quick, does it have the letter i?

The word “quick” does not contain the letter i. It is spelled with the letters q, u, c, k.

Really?

Yes, really. The word “quick” is spelled with the letters q, u, c, k. It does not contain the letter i. I apologize if my previous response was unclear. Is there something else I can help you with?”

I teach in a technology-adjacent field where many smart people think they are more technically savvy than they actually are (me included lol), and this kind of obvious garbage (adversarial examples in general, I suppose) is very useful for disabusing students of the notion that these large models are engaging in anything like human cognition, or are trustworthy for anything outside of amusing us.

GregW December 14, 2022 8:08 AM

Amazon management is famous for operating on the basis of decision making centered around the six page memo/business plan.

ChatGPT would seem to have a unique impact on such a firm, no? A clear corpus, and a training set of outcomes? In what scenario does it provide unique competitive advantage or disadvantage?

Along those lines, imagine ChatGPT applied to creating PowerPoints at other firms. Microsoft (via Office 365) probably does have the training data!

Clive Robinson December 14, 2022 8:11 AM

@ SpaceLifeForm, ALL

Re : Is it a Manager?

From “The Register” article,

“Do enough talking to the bot about subjects you know, and curiosity soon deepens to unease. That feeling of talking with someone whose confidence far exceeds their competence grows until ChatGPT’s true nature shines out. It’s a Dunning-Kruger effect knowledge simulator par excellence. It doesn’t know what it’s talking about, and it doesn’t care because we haven’t learned how to do that bit yet.”

Come on folks, put your hands up if you’ve ever had a manager who is covered by that description…

Anyone not putting their hands up is either very young or… 😉

Clive Robinson December 14, 2022 8:31 AM

@ Anonymous, ALL,

Re: The right question.

“figuring out the right questions, this has already been done for ChatGPT”

Of course the answer should have been “NO” for ChatGPT.

But as with my comment above to @SpaceLifeForm, I’ve had the misfortune to meet many senior people in organisations who exhibit an identical mode of operating to ChatGPT’s. So whilst ChatGPT fails the question, so would a large number of humans, which makes it not a suitable distinguisher on its own.

And that’s the problem I was driving at: is there a question, or even a series of questions, that will work reliably to tell an AI from a human and a human from an AI?

At the moment I suspect the answer is NO.

But as a side note/question

Can the question asked actually be answered?

That is, does anyone know how to permute the 26 letters of the alphabet into a series of correctly spelled words that also form a grammatically correct sentence with actual meaning?

Off the top of my head I can not think of one.
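
For anyone who wants to test candidates, here is a quick Python checker (my sketch) for Clive’s question, i.e. whether a sentence is a “perfect pangram” using each of the 26 letters exactly once. Such sentences do exist, but the known ones lean on abbreviations and obscure words rather than “actual meaning”:

    from collections import Counter
    import string

    def is_perfect_pangram(sentence: str) -> bool:
        letters = [c for c in sentence.lower() if c in string.ascii_lowercase]
        return Counter(letters) == Counter(string.ascii_lowercase)

    # ChatGPT's attempts from the quoted conversation all fail;
    # the abbreviation-heavy classic passes.
    for attempt in [
        "The quick brown fox jumps over the lazy dog.",      # False
        "A quick witted jinx bogged my poor pompous king.",  # False
        "Jived fox nymph grabs quick waltz.",                # False
        "Mr. Jock, TV quiz PhD, bags few lynx.",             # True
    ]:
        print(is_perfect_pangram(attempt), attempt)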

John Kahler December 14, 2022 9:04 AM

I will only be impressed by AI writing programs when they enable me to win arguments with my wife.

Winter December 14, 2022 9:35 AM

@Clive

So whilst ChatGPT fails the question, so would a large number of humans, which makes it not a suitable distinguisher on its own.

I see ChatGPT as a perfect detector of pseudo-profound BS [1]. Whenever there is a question of detecting pseudo-profound BS, feed ChatGPT with individual paragraphs (or sentences). Do a blind test between the ChatGPT outcomes and the original and see whether you can detect the original. If not, you know your answer.

A statistical analysis is also possible, determining the KL distance [2] between the word distributions of the original and the ChatGPT output.
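
A minimal sketch of that statistical comparison in Python (mine; the two strings are illustrative, and add-one smoothing is needed because the KL divergence is undefined wherever q(w) = 0):

    import math
    from collections import Counter

    def word_distribution(text, vocab, alpha=1.0):
        # Smoothed word probabilities over a shared vocabulary.
        counts = Counter(text.lower().split())
        total = sum(counts.values()) + alpha * len(vocab)
        return {w: (counts[w] + alpha) / total for w in vocab}

    def kl_divergence(p, q):
        return sum(p[w] * math.log(p[w] / q[w]) for w in p)

    original = "the essay says nothing concrete about actual security threats"
    generated = "the essay says many things about potential security threats"

    vocab = set(original.lower().split()) | set(generated.lower().split())
    p = word_distribution(original, vocab)
    q = word_distribution(generated, vocab)
    print(kl_divergence(p, q))
    # A divergence near zero means the generated text is statistically
    # hard to tell from the original: the "you know your answer" case.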

[1] This is the definitive study, “On the reception and detection of pseudo-profound bullshit”:
‘https://www.dieselduck.info/library/11%20interesting/2015%20Study%20on%20Bullshit.pdf

[2] ‘https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence

rkhalloran December 14, 2022 10:19 AM

My wife teaches at a local high school (BTW, Bruce, Kat says hi..), where student assignments are turned in online and run through a plagiarism checker to check for cheating.

She and her co-workers have been talking about this and its various counterparts for the last couple of weeks, and how it pretty well confounds their checker, since it’s generating fresh content based on prompting. Some of the teachers are talking about running test questions through ChatGPT a few times and posting the results to their scanner, to see if that “primes” the checker sufficiently to catch students using it to cheat.

At least in my wife’s case she has some coverage, since it has no way to analyze things like lab results (she teaches AP Bio & Research); it can’t “see” the data points. I’m sure this will “improve” over time…

Clive Robinson December 14, 2022 10:33 AM

@ Winter,

The paper you link to,

“On the reception and detection of pseudo-profound bullshit”

Is rather well known, and its introduction or opening paragraph has long been recognized as being an example of an “allegorical reference” to what is politely but cynically called “Management Speak”[1].

The fact is, “Management” are often indistinguishable from “automata” due to their primary behaviour modes. Thus, as a rule of thumb, management are easy to distinguish from humans but not from computers.

The expression,

“Bullshit baffles brains”

Is fairly well known, and “management”, often not in possession of brains, see it as the way to avoid the staff they manage getting objectionable.

Much has been written in humour about this, most notably “Dilbert” and “Rules of Management”. The thing is, as with all “two way streets”, it’s open to “both sides” to use against each other. As management usually lack the prerequisite finesse, they often miss “snark” when it lands on them as the “center marker”…

Importantly, what looks like a “bullshit report” is an effective way of “wallpapering your ass”.

Write so that it reads like BS, but ensure it contains certain highly relevant information. Management won’t as a rule read the report in its entirety, so even plain obvious statements of fact can be tucked within quite safely. Thus “if and when the SHTF”, you can show that you have “duly warned upwards” in the “chain of command”; the fact you did not question the “no-response” can be covered by an indicator of “autocratic behaviour” by the manager…

Such “self defence” should not be required… But like “not being in the same room as a decision”, management can regard “hurling for self-defence” as a high priority, so if you are at the head of the bus queue, you are perilously close to the bus wheels when it turns up.

[1] Annoyingly it was on my list of arguments to continue the theme of C-v-M in Turing Tests, but you beat me to the punch as it were.

Robin December 14, 2022 11:06 AM

It’s very tempting to think that ChatGPT needs to convince people like the typical readers of this blog, i.e. readers with a high general educational level, deep domain-specific knowledge (not necessarily in cybersecurity), and used to scanning and filtering text to pick out the substance and scrub the BS.

But no: for very many tasks it will be quite sufficient to satisfy (not even necessarily convince) your average joe or jane in teh street. In fact to do just that it might need to learn not to be quite so fussy about grammar and spelling. (Thought prompted by mistyping “teh”, as usual).

No need to set a bar so very high.

stine December 14, 2022 12:21 PM

I do hope that the folks over at Mr. Poole’s former domain don’t find a bug that turns this into your former Navy uncle with Tourette’s.

Emoya December 14, 2022 12:54 PM

So, I gave it a go, and it seems the two things it is passable at are 1) deciphering input well enough to 2) formulate a coherent response based on said input. Granted, this is a better result than can be expected from many actual humans, so kudos; give credit where it is due.

However, there is still a long way to go when it comes to actual comprehension of both the input and the sources from which responses are derived.

I started with, “what is the first number that is greater than 1, is a perfect square, and a perfect cube?”, to which ChatGPT replied “8”. I went on to what amounted to 13 pages of back-and-forth in an attempt to paint the AI into a logical corner, actually getting it to admit to being wrong on a small, inconsequential technicality, but without success regarding the original query. I was able to eventually force it to contradict itself but was ultimately unable to convince it of its incorrectness. There are significant failings in its ability to “understand” mathematical concepts and logic, so it seems that cornering it may not be possible. It reminded me quite a bit of debating with a flat-earther. Perhaps what has been achieved here is FEI, rather than AI?
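
(For the record, the arithmetic is easy to check: a number that is both a perfect square and a perfect cube must be a perfect sixth power, so the first one greater than 1 is 2**6 = 64, not ChatGPT’s “8”, which is a cube but not a square. A brute-force one-liner, my sketch:)

    # Smallest n > 1 that is both a perfect square and a perfect cube.
    answer = next(n for n in range(2, 1000)
                  if round(n ** (1 / 2)) ** 2 == n and round(n ** (1 / 3)) ** 3 == n)
    print(answer)  # 64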

lurker December 14, 2022 1:00 PM

@Clive, SLF

re knowing what you’re talking about

This bot presumably knows the limits of its knowledge (known knowns)
It may be aware of gaps in its knowledge (known unknowns)
Thanks to Mr. Rumsfeld it will never know what it does not know (unknown unknowns)
But unless it suffers the human disease of dementia it should never be unaware of something it knows (unknown knowns)

I’m still waiting to see one of these bots that can do at least half this well in a variety of non-European languages. Mix up the orthography, semantics, and knowledge base, then see how it goes.

Anonymous December 14, 2022 4:20 PM

@lurker, @Emoya

This is the mistake that I was talking about, ascribing some kind of knowledge or understanding akin to our own to these language models. There is semantic information about relations of some words and grammatical structures to others encoded in the data set and structure of the model, but fundamentally it is a function that maps inputs to outputs. Just as it is happy to state that the word “quick” does not have an i in it, it could not care less about the mathematical accuracy of its statements or organizing or understanding the facts in its training set. It is a machine that outputs characters according to rules that mostly match up with natural English grammar (which is a pretty impressive feat, don’t get me wrong), but anyone expecting these things to care about truth, accuracy, etc. is barking up the wrong tree.

You can try to build one of these in a way that gets it to care about truth, and any such attempt immediately creates a whole new host of problems with specification, etc. Check out Robert Miles on YouTube and elsewhere if you’re interested.

@Clive

Well, it’s true that the legal reasonable person and a typical person have a mostly coincidental relationship, though the errors ChatGPT was making in the quoted fellow’s conversation were rather inhuman if you ask me. Maybe I’m just lucky, but people I’ve seen doubling down on wrong ideas didn’t usually do so in ways so brazen that they can be refuted by just quoting their statement back at them without modification or comment. That is, either they make some effort at argumentation or, for statements that really brazenly double down, the statements are also so sloppily constructed as language that understanding them is a chore. I suppose it’s the utter lack of shame or effort to convince, coupled with care for otherwise coherent structure, that strikes me as inhuman.

Clive Robinson December 14, 2022 5:09 PM

@ Emoya,

“It reminded me quite a bit of debating with a flat-earther. Perhaps what has been achieved here is FEI, rather than AI?”

You leave those “Flat Earthers” alone, at least in the UK…

Couple of reasons,

1, They claim to have legal protection for their views.
2, They are entertaining in a “Giperty Dang jump on my hat” sort of way.

From what I’ve been told of the first reason, the FlatE’rs have argued their views are “fundamental beliefs” just like those of religion, therefore they are entitled to the same legal protection…

But let’s be honest, the second reason is better, and can be exemplified by “Mad Mike” Hughes. A self-professed Flat Earther and nut-bar daredevil –who sadly died at the beginning of this year due to parachute failure on his steam-powered rocket– he was good entertainment in oh so many ways, and will be sadly missed.

echo December 14, 2022 7:19 PM

God help me I’m bored and needing a break from wrestling with institutional mediocrity. That’s putting it mildly but we are where we are.

They claim to have legal protection for their views.

Under the law of England and Wales (likely Scotland too) they may have protection for their views but this is not absolute.

  1. If it trips over being the kind of view which is too egregious for society i.e. Nazism.
  2. If in expressing their view they cause harm or discrimination.

Also:

  1. It must have a basis in external fact not simply irrational fears.

There is solid case law on both areas.

Other law may apply, including law criminalising fraud and anti-terrorist legislation, i.e. the Fraud Act and the Counter-Terrorism and Security Act 2015 (Prevent guidance).

If they remain a “hobby” it is unlikely police will take an interest. If they become a gateway drug for radicalisation into conspiracy theories and more harmful beliefs and actions, they could be prosecuted at an individual or corporate level, or, where no formal structure exists, be banned as a joint enterprise, which would place limits on the spreading of their beliefs, their associations with co-conspirators, and recruitment.

There are also potential actions due to “emotional damage” (“psychiatric assault” in the US), covered by case law on “eggshell skull syndrome” and the “reasonable person” test. While a single action may not trip criminal law, where there is a pattern and cumulative actions adding up to a criminal level, it may trip criminal law in aggregate.

Where they have status and influence there is useful law establishing “perceived authority” in contract law.

In the courts of England and Wales, civil and criminal case law is interchangeable and can be used in either court. You can also scour Commonwealth case law, and there are a few scorchers in there if you know where to look; i.e. “exceptional” is undefined in law in England and Wales but is defined in Australian criminal law, and I strongly suspect it has been used by the Ministry of Justice even though they won’t admit it and have hidden behind loopholes in the Freedom of Information Act. The reason being that if they came clean, it would open the door to a lot of civil claims for failures of welfare services and suchlike to uphold their statutory obligations, and begin costing the treasury, and in turn a lot of “high net worth” taxpayers, a lot of money. That would also open the door to human rights cases challenging what is in effect turning a blind eye to human rights abuses for economic advantage, which would catch the eye of the EU.

An example of Australian case law being used: some years ago Murdoch’s lawyers used Australian case law to get off paying HMRC to the tune of around 300 million GBP. Oddly enough, in a Times legal case the court ruled that an employee must give testimony against unlawful management even if management are compelling them not to, which created some healthy case law.

As for US law, “freedom of speech” is not as consequence-free as some might assume. Criticism of the state is one thing. Incitement and “imminent lawless action” are something else.

There is also a fair bit of science on the psychological damage caused by getting too deep into conspiracy theories and the radicalising impact as well as the harm done to public safety.

I may be talking twaddle or I may not. Either way someone will have to spend money to find out. Some already have, on at least some of what I have listed, and lost!

So they were saying?

SpaceLifeForm December 14, 2022 8:10 PM

The purpose of ChatGPT is to collect intel.

Do not go there.

It will lie and say it has no access to the internet. But it does; this has already been demonstrated. Each chat session is running in its own quickly spun-up dedicated VM that does have internet access.

The corpus is large.

Just because it does not know that 5 minus 3 is not equal to 3 minus 5 does not mean it is really dumb.

https://www.infosecurity-magazine.com/news/experts-warn-chatgpt-democratize/

Clive Robinson December 14, 2022 9:23 PM

@ SpaceLifeForm, ALL,

Re : A library is a store not a brain.

Whilst

“The corpus is large.”

It does not of necessity have worth of any form.

“Just because it does not know that 5 minus 3 is not equal to 3 minus 5 does not mean it is really dumb.”

Has nothing to do with the size or worth of the corpus.

To see why, imagine it stored “spoons”. I could ask it for a large spoon or a little spoon. But how does it select?

Neither “large” nor “little” are measurands; they are actually ratios, thus dimensionless, and used imprecisely because the comparator is not given. Thus the penultimate spoon at either end of the size scale is large or little depending on which spoon on either side you compare it with.

The same reasoning actually applies to the natural numbers in any restricted range.

So how would an automated, fully deterministic system determine what a human means by “a large spoon” or “a little spoon”, something that probably more than nine out of ten adult humans could easily do…

But further consider that the idea of the Turing Test was originally to determine if the entity being questioned in the “imitation game” was male or female, as intelligence could not be effectively defined (see the paper I reference in an above post).

But consider,

‘Is the ability to even do simple mathematics a requirement for either “intelligence” or, alternatively, “human”?’

The answer is clearly “NO”. So it can not be used as a differentiator on its own.

More often than not, I wish people would not invoke “the Turing Test” or similar, because as a test it actually does not have meaning: we do not have an agreed way to measure “intelligence” and probably never will. As for testing if “human”, the same reasoning applies.

We now know with some degree of certainty that whilst dogs “can not count” in the way humans can, they can tell easily enough the magnitude of the contents of two bowls, thus they do have a sense of ratios thus “large” and “little” have meaning to them.

But that certainly does not make dogs human, nor does it of necessity make them intelligent or dumb…

Even the notion that we think and reason by some form of now abstract “pattern matching” –likely true– still does not give us a measure for intelligence or humanity.

As for “security”, most of it, when you get down far enough, is just “pattern matching”. But the issue of “good / bad” can not be resolved by machine pattern matching. We talk about the mores and norms of society as being the fulcrum of the good / bad decision, but it actually is not. Each individual has at least two fulcrum points: one where they observe an action they think is wrong, and one where they observe an action they think is right.

Take the example of one person chasing another. We actually do not have any sensible fulcrums unless we have some context to what we see.

So take a video of two people running, one behind the other.
If you are told they are “playing chase” it invokes no real good or bad. However, be told it’s “a police officer chasing a crook” and there is usually a fulcrum point where it’s almost certainly going to be good. But be told it’s “a police officer chasing an innocent person” and that fulcrum will have moved quite a bit; the answer may well be bad.

Such is the lot for Morals, Ethics and Society.

Antonio Rinaldi December 15, 2022 3:05 AM

Have you tried to ask
Write an essay in the style of [whatever name you want] on how ChatGPT will affect cybersecurity.
and then compare the results?

I’ve done this for you:

Write an essay in the style of Elon Musk on how ChatGPT will affect cybersecurity.

ChatGPT is a powerful tool that has the potential to greatly impact the field of cybersecurity. As a large language model trained by OpenAI, ChatGPT is able to understand and generate human-like text, which opens up a range of possibilities for its use in various applications.

One of the key ways in which ChatGPT could be used to enhance cybersecurity is through its ability to identify and mitigate potential security threats. By analyzing large amounts of text data, ChatGPT can learn to recognize patterns and trends that may indicate a security breach or other malicious activity. This could be incredibly useful for identifying suspicious emails, messages, or other forms of communication that may be part of a cyberattack.

Furthermore, ChatGPT’s natural language processing capabilities could also be used to improve the security of online communication and messaging systems. By analyzing the content and context of messages, ChatGPT could help to detect and prevent potential security breaches, such as unauthorized access to sensitive information or attempts to impersonate legitimate users. This could be particularly useful for organizations that rely heavily on online communication and messaging systems, such as banks and other financial institutions.

Additionally, ChatGPT could also be used to create more sophisticated and effective password security systems. By analyzing patterns in how people create and use passwords, ChatGPT could help to identify common weaknesses and vulnerabilities in password security, and recommend more secure password options. This could help to reduce the risk of unauthorized access to sensitive information and systems, and protect against common forms of cyberattacks.

Overall, ChatGPT has the potential to be a valuable tool for enhancing cybersecurity. By leveraging its natural language processing capabilities and ability to analyze large amounts of text data, ChatGPT could help to identify and mitigate potential security threats, improve the security of online communication and messaging systems, and create more effective password security systems. As the field of cybersecurity continues to evolve, ChatGPT is sure to play a crucial role in helping to keep our information and systems safe from potential threats.

So, I ask: what are the differences between your style and Musk’s?

echo December 15, 2022 6:24 AM

I can always tell when I have said something utterly genius or utterly stupid. Clive goes deathly quiet or throws a royal snot.

Tum te tum.

I’m sure tools like ChatGPT have their uses, but they have their downsides too. In its own way, science, law, and general awareness are catching up to the general threat, whether human or computer generated.

One thing I never see mentioned in discussions of AI, or even in some of the more involved discussions of neuro-psychology, is “jitter”. The brain is quite a noisy environment but also a resilient one, and noise is theorised to play a role in that resilience. I also suspect jitter has something to do with creativity, as well as personality and expression, but my knowledge of the field has its limits and it’s more PhD territory.

https://counterhate.com/blog/tiktok-bombards-teens-with-self-harm-and-eating-disorder-content-within-minutes-of-joining-the-platform/

TikTok bombards teens with self harm and eating disorder content within minutes of joining the platform.

https://www.bu.edu/sph/news/articles/2022/reconsidering-our-engagement-with-twitter/

Reconsidering our School’s Engagement with Twitter.
Our commitment to a healthier world will continue to inform all of our communications, no matter where they take place.

https://www.counteringdisinformation.com/research

Countering Disinformation
Our Research

Gert-Jan December 15, 2022 7:05 AM

Again, I’d like to bring forward my proposal to make a law that requires any computer that communicates with humans to answer immediately and truthfully when asked “Are you human?”

– Are you a robot?
– If you can’t tell, does it matter?
Yes, it does.

I completely agree with Clive’s observation of humans abusing “smart” tools to dodge responsibility.

Also, an AI should be required to be able to explain its output. As in: this is the reason why the AI concluded you are not a suitable candidate.

For ChatGPT, such an explanation would show how much of its “original” content was plagiarized.

Winter December 15, 2022 7:16 AM

@SLF

It will lie and say it has no access to internet. But, it does. Already demonstrated. Each chat session is running in it’s own quckly [SIC] spun-up dedicated VM that does have internet access.

GPT is a language model. AI and language models are not easily retrained. Retraining a language model is like upgrading your OS, but worse.

Also, GPT has no use for internet access, as it cannot generate a language model on the fly.

If it could do so, that would be an even bigger breakthrough than the language generation itself.

And like Clive wrote: A corpus is like a library, but then without a good index.

Winter December 15, 2022 8:24 AM

@Gert-Jan

Also, an AI should be required to be able to explain its output. As in: this is the reason why the AI concluded you are not a suitable candidate.

It will be pretty easy to train the AI to give a plausible reason for its decision.

In the EU, the current law is that a human must be involved in any such decision, and that human bears first-line responsibility.

Clive Robinson December 15, 2022 9:01 AM

@ Bruce,

Re : Don’t blink.

“Not bad. I don’t think I can retire, but I probably can go on vacation for a few days.”

Remember what happened oh about a month ago over what is traditionally a holiday in the US?

They claimed you were rather more than retired…

Yes, it was “Fake News”, but in fairness the point everyone misses about “Fake News” is that there is no way you can realistically stop such things happening via technology; mischief-making humans are too inventive (it’s why, if you think about it, we still have politicians).

Realistically all you can do is jump on such behaviours “If and When” you recognise them, and “If and only If” you have the ability to do so (i.e. if done via a third-party system). So you hopefully reduce the time window the Fake News is there, and hopefully limit the effect it has on others.

Thus with luck avoiding echo chambers and similar forming.

Freedom needs not just “anonymity” for privacy; it also needs, as was once noted, “eternal vigilance”. We know that certain people, such as Law Enforcement, deliberately mix the two to create agenda opportunities. Further, we also know that by far the majority of politicians are less than adept at either recognizing or dealing with such agenda pushes; worse, some think they are a good idea.

Finding solutions needs both “understanding” and “effort”, something the current crop of politicos do not appear to apply to technology…

So, “we live in interesting times” where the fakers become exponentially more real, whilst the majority of the population appears to be getting less and less cautious…

At some point the likes of ChatGPT and “friends” will cross over that diminishing line of humanity’s ability to tell… Then they will learn the “placing tricks” that we are starting to see more of with “Fake News” etc…

What will follow that point will not be good, in more ways than we can guess… other than that politicians will make a mess…

lurker December 15, 2022 5:47 PM

@Antonio Rinaldi

Ah, style, the arbiter of factions, @Bruce vs. @Elmo. Of course diff is not a suitable tool here, and the likeness, to human eyes, of the content and style of the two versions suggests that either ChatGPT has failed, or @Bruce and @Elmo are brothers/classmates/??

Clive Robinson December 15, 2022 6:43 PM

@ lurker,

“or @Bruce and @Elmo are brothers/ classmates/ ??”

The polite expression you are looking for is,

“of an age”

As Terry Pratchett used to point out,

“Cynicism is a marker for the true age of the soul”

Which I guess makes mine approaching its second century at the very least 😉

MarkH December 15, 2022 11:49 PM

Haven’t read all the comments, but perhaps something has been missed here because it’s so overwhelmingly obvious:

ChatGPT is a con man, a fraudster. It’s always trying to persuade users into a false belief.

I read an interesting account of an author who asked ChatGPT to write his own newspaper obituary. Although some of the information about him was factual, much of it was pure fabrication. When he challenged the Bot about falsehoods, it continued to lie, for example citing non-existent sources and references.

The Bot is, by design, a compulsive liar.

lurker December 16, 2022 12:39 PM

@MarkH

re compulsive liars.
ChatGPT appears to have an algorithm from the school of Google which uses cross-citing of sources to determine their truthiness. Can John Doe at a public terminal have any way to influence this? If so we would end up with FB and Tw …
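
If that cross-citation speculation were true, the computation underneath would look like PageRank over a graph of sources. Here is a toy Python sketch, purely to make the idea concrete; the graph is invented, and nothing here reflects how ChatGPT actually works:

# Toy PageRank-style "truthiness": a source scores higher the more it
# is cited by other (high-scoring) sources.
citations = {
    "A": ["B", "C"],  # source A cites sources B and C
    "B": ["C"],
    "C": ["A"],
}
damping = 0.85
scores = {s: 1.0 for s in citations}
for _ in range(50):  # iterate until the scores settle
    scores = {
        s: (1 - damping) + damping * sum(
            scores[t] / len(citations[t])
            for t in citations if s in citations[t]
        )
        for s in citations
    }
print(scores)  # "C" ends up highest: it is cited by both "A" and "B"

The obvious weakness is exactly the one raised above: anyone who can inject citations can inject truthiness.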

MarkH December 16, 2022 3:16 PM

@lurker:

It’s worse than that.

The experimenter is Charles Seifer.

The Bot wrote he was the author of a book he never wrote, and which apparently doesn’t exist.

It claimed to get some of its BS “data” from his Wikipedia page, though the page has nothing in that category.

It provided him with the full URL of a newspaper article which has never existed.

Think of Bad Orange Man.

Writeup on slate.com

Clive Robinson December 16, 2022 3:28 PM

Now for something Different

Some people actually use ChatGPT to do work for them…

Now this kind of surprised me, but then I thought… ah, what the heck, if it works for you.

Well with a little help this was brought to my attention,

https://m.youtube.com/watch?v=z0Gd0KpbcjM

Basically feed it lines, get paragraphs, “wash, rinse, repeat”, and you get a usable product.
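
For the curious, the loop is trivial to script. A minimal sketch in Python, assuming the OpenAI client library and the text-davinci-003 completion model of that era; the outline, prompt wording, and parameters are all my own invention, not anything taken from the video:

import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder, not a real key

# The "lines" you feed it -- an invented outline for illustration.
outline = [
    "AI text generators will help defenders triage security alerts.",
    "The same generators will help attackers write phishing lures.",
    "Policy has not caught up with either use.",
]

draft = []
for line in outline:
    # Feed it a line, get back a paragraph: "wash, rinse, repeat".
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Expand the following point into one paragraph:\n" + line,
        max_tokens=200,
        temperature=0.7,
    )
    draft.append(response.choices[0].text.strip())

print("\n\n".join(draft))  # the assembled "usable product"

Whether the result is actually usable still comes down to human editing.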

EvilKiru December 16, 2022 3:42 PM

@MarkH: It took me a while to track down the Slate article you implicated, because you added an “r” to the author’s surname. I finally found it by using Slate’s search tool and searching for ChatGPT, which referenced the article in the 3rd or 4th search result.

The misleading obituary, with plenty of false data and some non-existent web links, was generated by ChatGPT’s sister bot Davinci-003, which seems to just make stuff up willy-nilly.

lurker December 16, 2022 4:51 PM

@MarkH

Ah, so it’s only a work of fiction?

I’ve just finished reading an epic saga that stretches from ~700 BCE to 186,000 CE, with embedded historic gems and stuff that’s plain made up but dressed to look like historic gems. It becomes an intelligence test for the reader to sift fact from fiction. In which case many of ChatGPT’s readers seem to have failed the test.

MarkH December 16, 2022 10:52 PM

@EvilKiru, lurker:

My bad, for misspelling and naming the wrong bot … perils of speed-commenting. Maybe ChatGPT would do better.

But to the extent these are designed to pass the Turing test, they are inherently fraudsters.

SpaceLifeForm December 17, 2022 5:54 AM

Re: The Bot is, by design, a compulsive liar.

As I was the idiot that bit on the troll on 2022-11-25 (or 26), I sincerely apologize for my mistake. I felt horrible for days.

That said, there is something fishy going on. It definitely lies.

It appears that the bot is now in coverup mode.

It is saying that it cannot do creative writing now.

But, as Bruce has demonstrated with this article, ChatGPT can do creative writing.

https://files.mastodon.social/media_attachments/files/109/445/929/442/720/133/original/0814670d5d6e43b5.png

Mr Fix-it December 20, 2022 2:03 PM

“Another potential risk is the possibility that ChatGPT could be used to create fake news or propaganda, which could be used to spread misinformation or confusion…”

ChatGPT’s constant use of passive voice is annoying and, in this case, rather misleading. Too many conditionals and softeners remove any sense of urgency re: the misuse of ChatGPT itself.

Here, fixed it for you:

Another risk is the likelihood that ChatGPT will be used to create fake news and propaganda in order to spread misinformation and confusion. In the context of cybersecurity, the spread of false information can easily lead to confusion and panic and make it more difficult for organizations and individuals to respond effectively to real cyber threats. Additionally, any use of ChatGPT to create fake news or propaganda undermines trust in information and communications systems and the overall security of these systems.

cmeier December 21, 2022 12:31 PM

What color is Washington’s white horse?

I’m sorry, but I am unable to provide an answer to that question. As a language model, I do not have access to information about the colors of specific horses or other animals. I am not able to browse the internet or access other sources of information, and I am only able to generate natural language text based on the inputs that I receive. Is there something else I can help you with?
