Security, privacy, and generative AI

The integration of large language models into third-party products and applications presents many unknown security and privacy risks. Here’s how to address them.


Since the proliferation of large language models (LLMs) like OpenAI’s GPT-4, Meta’s Llama 2, and Google’s PaLM 2, we have seen an explosion of generative AI applications in almost every industry, cybersecurity included. However, for a majority of LLM applications, privacy and data residency are major concerns that limit the applicability of these technologies. In the worst cases, employees are unknowingly sending personally identifiable information (PII) to services like ChatGPT, outside of their organization’s controls, without understanding the associated security risks.

In a similar vein, not all base models are created equal. The output of these models might not always be factual, and the variability of their outputs depends on a wide variety of technical factors. How can consumers of LLMs validate that a vendor is using the most appropriate model for the desired use case, while respecting privacy, data residency, and security?

This article will address these considerations and aims to give organizations a better ability to evaluate how they use and manage LLMs over time.

Proprietary vs. open-source LLMs

To begin the discussion, it’s important to provide some technical background on the implementation and operation of LLM services. In the broadest sense, there are two classes of LLMs: proprietary and open-source models. Examples of proprietary LLMs are OpenAI’s GPT-3.5 and GPT-4 and Google’s PaLM 2 (the model behind Bard), where access is gated behind internet-facing APIs or chat applications.

The second class is open-source models, like those hosted on the popular public model repository Hugging Face, or models like Llama 2. It is worth noting that most commercial services built on open-source LLMs can be expected to run some variant of Llama 2, as it is currently the state-of-the-art open-source model for many commercial applications.

The main advantage of open-source models is the ability to host them locally on organization-owned infrastructure, either on dedicated on-premises hardware or in privately managed cloud environments. This gives owners complete control over how the model is used and ensures that data remains within the domain and the control of the organization. While these open-source models currently lag behind the state-of-the-art GPT-4 and PaLM 2 models in performance, that gap is quickly closing.
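For teams evaluating the self-hosting route, the sketch below shows what locally serving an open-source model can look like using the Hugging Face transformers library. It assumes access to the gated meta-llama/Llama-2-7b-chat-hf weights, a GPU with sufficient memory, and the accelerate package; the model ID and prompt are illustrative only.

```python
# A minimal sketch of hosting an open-source LLM on organization-owned
# infrastructure with Hugging Face transformers. Assumes the Llama 2 license
# has been accepted on Hugging Face (gated weights) and that a suitable GPU
# is available; the model ID and prompt are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # gated model; requires an access token

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to reduce memory footprint
    device_map="auto",          # requires the accelerate package
)

prompt = "Summarize the key controls for protecting PII in internal documents."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Because inference runs on hardware the organization controls, neither the
# prompt nor the response leaves its environment.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```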

Although there is significant hype around these technologies, they can introduce several security concerns that are easily overlooked. Currently, there are no strong AI-specific regulatory or compliance standards with which to govern or audit these technologies. Many legislative efforts are in the works, such as the Artificial Intelligence and Data Act (AIDA) in Canada, the EU AI Act, the Blueprint for an AI Bill of Rights in the US, and other niche standards being developed through NIST, the SEC, and the FTC. However, notwithstanding these initial guidelines, very little regulatory enforcement or oversight exists today.

Developers are therefore responsible for following existing best practices around their machine learning deployments, and users should perform adequate due diligence on their AI supply chain. With these three aspects in mind—proprietary vs. open-source models, performance and accuracy considerations, and lack of regulatory oversight—there are two main questions that must be asked of vendors that are leveraging LLMs in their products: What is the base model being used, and where is it being hosted?

Safeguarding security and privacy of LLMs

Let’s tackle the first question. For most vendors, the answer will typically be GPT-3.5 or GPT-4 if they are using a proprietary model. If a vendor is using an open-source model, you can expect it to be some variant of Llama 2.

If a vendor is using the GPT-3.5 or GPT-4 model, then several data privacy and residency concerns should be addressed. For example, if they are using the OpenAI API, you can expect that any entered data is being sent to OpenAI systems. If PII is being shared outside of the company domains, this will likely violate many data governance, risk, and compliance (GRC) policies, making the use of the OpenAI API unacceptable for many use cases.

However, in response to concerns raised by developers earlier this year, OpenAI modified its privacy policy to state that business data sent via ChatGPT Enterprise or the API will not be used to train its models. As such, organizations that engage with generative AI solutions built on the OpenAI API should perform adequate third-party risk assessments, with consideration given to the sensitivity of the data and the nature of the use case. Similarly, if your generative AI vendor or application uses the Azure OpenAI service, then data is not shared with or saved by OpenAI.
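If data residency is the deciding factor, one common mitigation is to route calls through an Azure OpenAI deployment in the organization’s own tenant rather than the public OpenAI API. The snippet below is a minimal sketch of that pattern using the openai Python client; the endpoint, deployment name, and API version are placeholders for values provisioned in your Azure subscription.

```python
# A minimal sketch of calling an Azure OpenAI deployment instead of the
# public OpenAI API, keeping prompts within the organization's Azure tenant.
# The endpoint, deployment name, and API version below are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-company.openai.azure.com",  # private endpoint (placeholder)
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # placeholder API version
)

response = client.chat.completions.create(
    model="gpt-4-internal",  # name of your Azure deployment, not a public model ID
    messages=[{"role": "user", "content": "Draft a customer follow-up email."}],
)
print(response.choices[0].message.content)
```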

Note that there are several technologies that can scrub LLM prompts of PII before they are sent to proprietary endpoints, mitigating the risk of PII leakage. However, PII scrubbing is difficult to generalize and to validate with 100% certainty. As such, open-source models that are locally hosted provide much greater protection against GRC violations than proprietary models.
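As a concrete illustration of how limited scrubbing can be, the sketch below redacts a few obvious PII patterns with regular expressions before a prompt would be sent to a proprietary endpoint. Production systems typically rely on dedicated detection tooling (Microsoft Presidio is one example), and even then, as noted above, complete coverage cannot be guaranteed; the patterns and sample text are illustrative assumptions.

```python
# A deliberately simple sketch of scrubbing obvious PII from a prompt before
# it is sent to a proprietary LLM endpoint. Real deployments typically use
# dedicated detection tools; no scrubber can guarantee 100% coverage.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace recognizable PII with typed placeholders before the API call."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}_REDACTED]", prompt)
    return prompt

raw = "Email jane.doe@example.com or call 555-123-4567 about SSN 123-45-6789."
print(scrub_prompt(raw))
# -> "Email [EMAIL_REDACTED] or call [PHONE_REDACTED] about SSN [SSN_REDACTED]."
```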

However, organizations deploying open-source models must ensure that stringent security controls are in place to protect the data and models from threat actors (e.g., encryption on API calls, data residency controls, role-based access controls on data sets). Conversely, if privacy is not a concern, proprietary models are typically preferred due to their cost, latency, and the fidelity of their responses.

To expand the level of insight into the AI deployment, you can use an LLM gateway. This is an API proxy that allows the user organization to perform real-time logging and validation of requests sent to LLMs, as well as to track any data that is shared with and returned to individual users. The LLM gateway provides a point of control that can add further assurance against PII violations by monitoring requests and, in many cases, remediating security issues associated with LLMs. This is a developing area, but it will be necessary if we want to build AI systems that are ‘secure by design’.
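There is no single standard implementation of an LLM gateway, but the sketch below illustrates the core idea: a thin API proxy that logs each request, applies a policy check, and only then forwards the call upstream. The framework choice (FastAPI with httpx), the endpoint path, the upstream URL, and the naive contains_pii check are all assumptions for illustration.

```python
# A minimal sketch of an LLM gateway: an API proxy that logs each request,
# applies a policy check, and only then forwards the call to the upstream LLM.
# Framework, endpoint path, and upstream URL are illustrative assumptions.
import logging

import httpx
from fastapi import FastAPI, HTTPException, Request

app = FastAPI()
logger = logging.getLogger("llm_gateway")

UPSTREAM_URL = "https://api.openai.com/v1/chat/completions"  # placeholder upstream

def contains_pii(text: str) -> bool:
    """Stand-in policy check; a real gateway would use a proper PII detector."""
    return "@" in text  # naive example: flag anything that looks like an email

@app.post("/v1/chat/completions")
async def proxy_chat(request: Request):
    payload = await request.json()
    user_text = " ".join(m.get("content", "") for m in payload.get("messages", []))

    # Central audit trail: who asked what, and how much.
    logger.info("chat request from %s: %d chars", request.client.host, len(user_text))

    if contains_pii(user_text):
        raise HTTPException(status_code=400, detail="Request blocked: possible PII detected")

    # Forward the validated request to the upstream LLM endpoint.
    async with httpx.AsyncClient(timeout=60) as client:
        upstream = await client.post(
            UPSTREAM_URL,
            json=payload,
            headers={"Authorization": request.headers.get("Authorization", "")},
        )
    return upstream.json()
```

Pointing internal applications at a gateway like this, rather than directly at the vendor endpoint, gives the organization a single choke point for auditing, policy enforcement, and remediation.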

Ensuring the accuracy and consistency of LLMs

Now, onto model performance, or accuracy. LLMs are trained on enormous amounts of data scraped from the internet. Such data sets include CommonCrawl, WebText, C4, CoDEx, and BookCorpus, just to name a few. This underlying data comprises the world the LLM will understand. Thus, if the model is trained only on a very specific kind of data, its view will be very narrow, and it will experience difficulty answering questions outside of its domain. The result will be a system that is more prone to AI hallucinations that deliver nonsensical or outright false responses.

For many of the proposed applications in which LLMs should excel, delivering false responses can have serious consequences. Luckily, many of the mainstream LLMs have been trained on numerous sources of data. This allows these models to speak on a diverse set of topics with some fidelity. However, there is typically insufficient knowledge around specialized domains in which data is relatively sparse, such as deep technical topics in medicine, academia, or cybersecurity. As such, these large base models are typically further refined via a process called fine-tuning.

Fine-tuning allows these models to achieve better alignment with the desired domain. Fine-tuning has become such a pivotal advantage that even OpenAI recently released support for this capability to compete with open-source models. With these considerations in mind, consumers of LLM products who want the best possible outputs, with minimal errors, must understand the data on which the LLM is trained (or fine-tuned) to ensure optimal usage and applicability.
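To make the fine-tuning step concrete, the sketch below shows how a managed fine-tuning job might be launched through the OpenAI Python client, the capability noted above. The training file name, its contents, and the base model are placeholder assumptions; fine-tuning an open-source model would instead use tooling such as the Hugging Face ecosystem.

```python
# A minimal sketch of launching a managed fine-tuning job via the OpenAI API.
# The training file and base model are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Domain-specific examples in JSONL chat format, one record per line, e.g.:
# {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("cybersecurity_examples.jsonl", "rb"),  # placeholder data set
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # base model to specialize
)

# The resulting fine-tuned model ID can later be used in place of the base model.
print(job.id, job.status)
```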

For example, cybersecurity is an underrepresented domain in the underlying data used to train these base models. That in turn biases these models to generate more fictitious or false responses when discussing cyber data and cybersecurity. Although the portion of cybersecurity topics within the training data of these LLMs is hard to discern, it is safe to say that it is minimal compared to more mainstream topics. For instance, GPT-3 was trained on 45 TB of data; compare this to the 2 GB cyber-focused data set used to fine-tune the model CySecBERT. While general-purpose LLMs provide natural language fluency and the ability to respond realistically to users, the specialist data used in fine-tuning is where the most value is generated.

While fine-tuning LLMs is becoming more commonplace, gathering the appropriate data on which to fine-tune base models can be challenging. It typically requires the vendor to have a relatively mature data engineering infrastructure and to collect the relevant attributes from unstructured formats. As such, understanding how a vendor implements the fine-tuning process, and the data on which the model is trained, is pivotal to understanding its relative performance and, ultimately, how reliably the application can deliver trustworthy results. For companies interested in developing AI products or using a service from another provider, understanding where that data came from and how it was used in fine-tuning will be a new market differentiator.

As we look at the security, privacy, and performance issues that come with LLM usage, we must be able to manage and track how users interact with these systems. If we do not consider this from the start, we run the same risks that previous generations of IT professionals faced with shadow IT and insecure default deployments. We have a chance to build security and privacy into how generative AI is delivered from the start, and we should not miss this opportunity.

Jeff Schwartzentruber is senior machine learning scientist at eSentire.

Generative AI Insights provides a venue for technology leaders to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.

Copyright © 2023 IDG Communications, Inc.