New Paper: “Securing AI: Similar or Different?“

Anton Chuvakin
Anton on Security
Published in
3 min readSep 13, 2023

--

As you may have noticed, we have released a new paper on securing AI. I want to share a few additional things here on top our official launch blog.

src: http://bit.ly/ociso-ai1-pod

For a few years (so, yes, I did start before the ChatGPT launch, if you have to ask…), I’ve been a little obsessed about the differences between securing AI systems and securing any other complex enterprise data-intensive systems (please see this blog and podcasts that are mentioned there). This applies to securing the infrastructure, models, capabilities and other things related to AI, not just running AI workloads.

Notice, for example, that I was asking every guest on these podcast episodes about their view of this topic. By the way, it became very clear to me that most sane people today are more concerned about data theft rather than about the freakin’ robot rebellion.

So now we had a chance to create a paper that diligently goes through security domains and then notes down which of the domains apply without major changes to the task of securing AI and which domains change a lot.

src: outline for http://bit.ly/ociso-ai1-pod

We also interviewed our CISO and of course I’ve asked the same question. Notice where Phil says what his highlights are:

So on one level, it’s like software security. So you have a whole bunch of elements of your AI system, that you have to manage their provenance, you have to manage their secure build, you have to manage the lifecycle, you have to manage the testing and the regression testing. But another level, it’s like data security and data governance, because you have to manage the training data, the test data, or the weights and the parameters, you have to show that there is provenance of that data, you’ve got to worry about the intellectual property, you’ve got to worry about all sorts of aspects of the data that is then used by the software.

To implement the AI system, you’ve got to think about input and output management in terms of the prompts and the output. And then like we talked about before, you’ve got to think about the guardrails and the circuit breakers and the API gateways, and all of the other things that surround the AI system, the constraints on how it was used, and how it can be prompted to not reveal sensitive information and to not produce any outputs that you don’t expect.”

All this means that unlike in other security domains, AI security does raise some broader issues than, say, securing a CRM or ERP system. After all, there’s no movie where a Siebel server gets sent forward from the 1980s and then takes over the world of the 2030s… but I digress.

However, many otherwise intelligent AI security conversations devolve into discussions about the fate of humanity and such. We wanted to separate these, ahem, longer term concerns into something that affects large organizations implementing AI technologies today. Hence our paper!

Please also look at our previous work on our SAIF framework. At this time, we are also working on another fun paper on how to apply this at your organization for your specific use cases of AI …

For now, enjoy “Securing AI: Similar or Different?“ [PDF]!

Related blogs and podcast:

--

--