4 Cybersecurity Risks Related to ChatGPT and AI-powered Chatbots

Tech companies should be cautious with ChatGPT and other AI chatbot tools—and evaluate possible cyber risks, says CompTIA CEO Todd Thibodeaux.

There's a lot of hype around ChatGPT and other artificial intelligence (AI) tools now, and the space is transforming quickly. As tech companies look to augment their current and future processes and products, they should proceed with caution and evaluate potential risks associated with the technology and how it is being used in the market.

As far as cybersecurity threats go, the current generation of these tools presents heightened risks to be aware of, including:

  • Enhanced phishing content that sounds like something a native English speaker would write (as opposed to text poorly translated into English)
  • Insufficient safeguards, enabling less experienced bad actors to use malicious code that they may not otherwise have the expertise to write or find on their own
  • Potential to hack or hijack an AI chatbot tool in order to direct people to malicious sites
  • Unknown or missing parameters governing how conversational AI tools can be licensed and used

Related Blog: How to Think About ChatGPT and the New Wave of AI

Smarter Defenses, Transparency Will Be Critical

Knowing that human error still accounts for most successful cyberattacks, enhanced phishing practices shouldn’t be taken lightly. We’ll need smarter defenses on the inbound side to detect and quarantine these more sophisticated messages. End users already struggle to identify phishing attempts, and now they could appear even more legitimate.
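
To see why polished, native-sounding lures matter, consider a minimal sketch of the kind of keyword-and-heuristic filter many inbound defenses still rely on. Everything here (the signal names, weights and threshold) is an illustrative assumption, not a production design; the point is that AI-generated phishing is written precisely to sail past checks like these.

```python
import re
from urllib.parse import urlparse

# Illustrative heuristics only -- real gateways layer ML models, sender
# reputation, and URL sandboxing on top of simple checks like these.
URGENCY_PHRASES = ("verify your account", "urgent action required", "password expires")

def phishing_score(sender_domain: str, body: str) -> float:
    """Return a rough 0-1 suspicion score for an inbound message."""
    score = 0.0
    lowered = body.lower()

    # Signal 1: urgency language typical of credential-harvesting lures.
    if any(phrase in lowered for phrase in URGENCY_PHRASES):
        score += 0.4

    # Signal 2: a link whose domain does not match the sender's domain.
    for url in re.findall(r"https?://\S+", body):
        host = urlparse(url).hostname or ""
        if sender_domain not in host:
            score += 0.3
            break

    # Signal 3: plain-text requests for credentials.
    if re.search(r"\b(password|ssn|credit card)\b", lowered):
        score += 0.3

    return min(score, 1.0)

msg = "Urgent action required: verify your account at https://paypa1-login.example"
if phishing_score("paypal.com", msg) >= 0.5:
    print("quarantine")  # hold for human review instead of delivering
```

The catch is that a well-written, AI-generated lure can avoid the canned urgency phrases entirely, leaving signals like these with nothing to match.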

If malicious code becomes easier to obtain, the volume of attacks will rise with it. Drawing in even 1% more wannabe criminals to execute hacks and phishing attempts at the same rate increases the risk to organizations and end users.

The potential to redirect people from legitimate chatbot sites and tools to malicious sites is where things get particularly interesting. In the early stages, as the industry sorts out the guardrails, it will take time for the AI to distinguish good actors from bad ones. The algorithms might err on the side of inclusion, which could lead to malicious content being ingested and recommended.

As popular search engines explore and integrate conversational capabilities into their results, what will those recommendations be based on? In a keyword-driven SEO world, it's fairly clear: relevant meta-tagging, cross-linking and search rankings combine into a decisive score.
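
To make that concrete, here is a toy sketch of a keyword-era relevance score. The weights and fields are invented for illustration (real engines combine hundreds of signals), but every factor is explicit and inspectable, which is exactly the transparency at stake below.

```python
# Toy keyword-era ranking: weights and fields are invented for illustration.
def relevance_score(query_terms: set, page: dict) -> float:
    """Combine keyword overlap, meta-tag hits, and inbound links into one score."""
    body_hits = len(query_terms & set(page["body"].lower().split()))
    meta_hits = len(query_terms & set(page["meta_tags"]))
    # Each signal's contribution is explicit and auditable.
    return 1.0 * body_hits + 2.0 * meta_hits + 0.1 * page["inbound_links"]

page = {
    "body": "ChatGPT security risks explained",
    "meta_tags": {"chatgpt", "security"},
    "inbound_links": 40,
}
print(relevance_score({"chatgpt", "security"}, page))  # 10.0
```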

But we're not sure how that's going to pan out in a ChatGPT-like world. Transparency about how these recommendations are generated will be critical for people to trust the process. When answers are presented without context, people are more likely to trust them because they come across as definitive: you ask a question and get an answer, but you never question whether it's the right answer.

When it comes to parameters and licensing, most of today’s solutions run on the same servers from the same code base. What happens when they start to get calls to run in independent environments? What will the service-level agreements (SLAs) look like governing their use? How do they keep the functionality from being acquired by the wrong sort of actors? There are so many unknowns, it’s hard to know where to begin.

Follow the Innovation, Ask the Right Questions

I'm still in a wait-and-see mindset as it relates to ChatGPT and cybersecurity. We've heard grumblings that OpenAI is working on a new version that is 15 times more powerful than anything released to date.

I wasn’t surprised to hear that, but it raises some questions:

  • Can it run more algorithms more quickly?
  • Are the responses 15 times more relevant?

I find that possibility hard to believe. These processes eat huge amounts of data. Several years ago, a CompTIA board member noted that the industry hadn’t begun to capitalize on the sale of pure processing power in the cloud. He was very prescient.

Most cloud apps today eat hard disk space but are nowhere near as CPU-intensive as something like ChatGPT. Now just think of hundreds of these engines running simultaneously, consuming trillions of CPU cycles every minute!
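
A quick back-of-envelope calculation makes that figure concrete. Every number below is an assumption chosen for illustration (engine count, cores per engine, clock speed), not a published spec:

```python
# Back-of-envelope only: every figure below is an assumed, illustrative value.
engines = 200          # "hundreds of these engines"
cores_per_engine = 64  # assumed server cores dedicated to each engine
clock_hz = 3e9         # assumed 3 GHz per core

cycles_per_minute = engines * cores_per_engine * clock_hz * 60
print(f"{cycles_per_minute:.2e} cycles per minute")  # ~2.30e+15
```

Under those assumptions the fleet burns roughly 2.3 quadrillion cycles a minute, so "trillions of CPU cycles every minute" is, if anything, conservative.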

Whose IP Is It Anyway?

In addition, there’s a non-cyber factor to follow. How will the developers abide by copyrights and trademarks? Search engines don’t pretend that the content they serve up is their own.

The way ChatGPT presents its output, at least for now, makes it look as though the AI is magically producing all the answers and crafting the content itself. Of course, the truth is that it has ingested and summarized the internet (through its 2021 training cutoff) to produce an answer to the request.

What it knows about writing a sonnet, it has learned from what real people have done. I can see a system of micropayments to content creators and website owners every time their works are accessed or used in a meaningful way. If developers don’t solve for this infringement issue early, they could be hit with lawsuit after lawsuit.
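
Purely as a thought experiment, the accounting side of such a scheme could be trivially simple; the hard, unsolved part is attribution. Everything in this sketch is hypothetical (the rate, the attribution input, the functions), since no current system reliably traces an answer back to its sources:

```python
from collections import Counter

# Hypothetical throughout: assumes the model can attribute each answer to
# the sources it drew on, which no current system reliably does.
RATE_PER_USE = 0.001  # assumed dollars credited per attributed use

usage = Counter()

def record_answer(source_domains: list) -> None:
    """Credit every source that meaningfully contributed to an answer."""
    usage.update(source_domains)

def payouts() -> dict:
    return {domain: count * RATE_PER_USE for domain, count in usage.items()}

record_answer(["poetryfoundation.org", "en.wikipedia.org"])
record_answer(["poetryfoundation.org"])
print(payouts())  # {'poetryfoundation.org': 0.002, 'en.wikipedia.org': 0.001}
```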

For all these reasons (and more), it’s a space to follow very closely. At the very least, IT companies should familiarize themselves with the technology, because customers are going to be asking questions, and you’re going to want to have some answers.

Todd Thibodeaux is CEO of CompTIA.
