SAFE Security’s Cyber Risk Cloud of Clouds generates likelihoods for different risk scenarios based on an organization’s cybersecurity posture.

AI-based cyber risk management SaaS vendor SAFE Security has announced the release of Cyber Risk Cloud of Clouds, a new offering it claims uses generative AI to help businesses predict and prevent cyber breaches. It does so by answering questions about a customer’s cybersecurity posture and generating likelihoods for different risk scenarios, including the likelihood of a business suffering a ransomware attack in the next 12 months and the dollar impact of such an attack, the firm said. This enables organizations to make informed, predictive security decisions to reduce risk, SAFE Security added.

The emergence of generative AI chat interfaces built on large language models (LLMs), and their impact on cybersecurity, is a significant area of discussion. Concerns about the risks these technologies could introduce range from the sharing of sensitive business information with advanced self-learning algorithms to malicious actors using them to significantly enhance attacks. Some countries, US states, and enterprises have ordered, or are considering, bans on the use of generative AI technology such as ChatGPT on data security, protection, and privacy grounds.

However, generative AI chatbots can also enhance cybersecurity for businesses in multiple ways, giving security teams a much-needed boost in the fight against cybercriminal activity.

SafeGPT provides “comprehensible overview” of cybersecurity posture

SAFE’s generative AI chat interface SafeGPT, powered by LLMs, provides stakeholders with a clear and comprehensible overview of an organization’s cybersecurity posture, the firm said in a press release.
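SAFE does not disclose how it turns posture data into these figures, but its two headline outputs, a 12-month breach likelihood and a dollar impact, combine naturally into an expected annual loss in the style of FAIR-based cyber risk quantification. The sketch below is purely illustrative, not SAFE's model; the function name and figures are hypothetical.

```python
# Illustrative only: SAFE does not publish its scoring model. A common way to
# combine a breach probability with a per-event dollar impact is the
# FAIR-style annualized loss expectancy (ALE): likelihood x impact.

def annualized_loss_expectancy(breach_likelihood: float, dollar_impact: float) -> float:
    """Expected annual loss, given a 12-month breach probability and a
    per-event dollar impact."""
    if not 0.0 <= breach_likelihood <= 1.0:
        raise ValueError("breach_likelihood must be a probability in [0, 1]")
    return breach_likelihood * dollar_impact

# Hypothetical inputs: a 22% chance of ransomware in the next 12 months,
# with an estimated $4.5M impact per event.
ale = annualized_loss_expectancy(0.22, 4_500_000)
print(f"${ale:,.0f}")  # → $990,000
```

A figure like this is what lets a security team rank remediation actions by how many risk dollars each one removes, rather than by raw vulnerability counts.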
Through its dashboard and natural language processing capabilities, SafeGPT enables users to ask targeted questions of their cyber risk data, determine the most effective strategies for mitigating risk, and respond to inquiries from regulators and other key stakeholders, it added. According to SAFE, the types of questions the service can answer include:

- How likely are you to be hit by a ransomware attack in the next 12 months?
- What is your likelihood of being hit by the latest malware, such as “Snake”?
- What is your dollar impact for that attack?
- What prioritized actions can you proactively take to reduce the ransomware breach likelihood and reduce dollar risk?

Cyber Risk Cloud of Clouds brings together disparate cyber signals, including those from CrowdStrike, AWS, Azure, Google Cloud Platform, and Rapid7, into a single view, the firm said. This provides organizations with visibility across their attack surface ecosystem, including technology, people, and third parties, it added.

CSO asked SAFE Security for further information about the type of data SafeGPT uses to answer questions about a customer’s cybersecurity posture and risk incident likelihood, as well as how the company ensures the security of the data inputted into, and the answers outputted by, SafeGPT.

Questions and answers do not leave SAFE’s datacenter or train models

SAFE uses customers’ own risk data augmented with external threat intelligence to generate a real-time, comprehensive cybersecurity posture, Saket Modi, CEO of SAFE, tells CSO. “SAFE has deployed the Azure OpenAI service in its own data center so that the customer data does not leave it. Azure has several security measures in place to ensure the security of the data and they do not use any customer data to train their models,” Modi adds.

For a question like “What is the likelihood of Snake malware?” in an environment, for example, SafeGPT queries the local customer data loaded in Azure OpenAI and provides the answer, says Modi.
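Modi’s description, a privately deployed Azure OpenAI instance answering questions from the customer’s own loaded risk data, resembles a retrieval-grounded prompting pattern. The sketch below is an assumption-laden illustration, not SAFE’s implementation: the endpoint, deployment name, and record schema are invented, and only the prompt-building step is actually exercised here.

```python
# Hypothetical sketch of grounding an LLM answer in locally held risk data,
# in the spirit of SafeGPT querying data loaded alongside a private Azure
# OpenAI deployment. All names and records below are invented.

def build_grounded_messages(question: str, risk_records: list[dict]) -> list[dict]:
    """Build a chat prompt that instructs the model to answer only from the
    customer's own risk records, keeping answers tied to local data."""
    context = "\n".join(f"- {r['signal']}: {r['finding']}" for r in risk_records)
    system = (
        "You are a cyber risk assistant. Answer ONLY from the risk data "
        "below; if the data does not contain the answer, say so.\n" + context
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

def ask_private_deployment(question: str, risk_records: list[dict]) -> str:
    """Send the grounded prompt to a privately deployed Azure OpenAI service.
    Not called in this sketch; requires the `openai` package and real
    credentials."""
    from openai import AzureOpenAI  # deferred import; no network call made here
    client = AzureOpenAI(
        azure_endpoint="https://risk-llm.internal.example",  # hypothetical
        api_key="<key>",
        api_version="2024-02-01",
    )
    resp = client.chat.completions.create(
        model="risk-gpt-deployment",  # hypothetical deployment name
        messages=build_grounded_messages(question, risk_records),
    )
    return resp.choices[0].message.content

records = [{"signal": "CrowdStrike", "finding": "EDR coverage on 96% of endpoints"}]
msgs = build_grounded_messages(
    "How likely are we to be hit by ransomware in the next 12 months?", records
)
print(msgs[0]["content"])
```

Because the model and the data sit in the same environment, neither the question nor the answer has to traverse a public API, which is the privacy property Modi emphasizes.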
“It does not expose the question or the answer outside the SAFE datacenter. SAFE’s product development goes through extensive security testing throughout its development process.”

LLM “hallucinations” a chief concern of generative AI

AI and machine learning have been used to predict security exploits and breaches for at least a decade. What’s new is the use of generative AI with a chat interface that lets SOC analysts quiz the backend LLM on the likelihood of an attack, Rik Turner, a senior principal analyst for cybersecurity at Omdia, tells CSO.

“The questions they ask will need to be honed to perfection for them to get the best, and ideally the most precise, answers. LLMs are notorious for making things up, or to use the term of art, ‘hallucinating,’ such that there is a need for anchoring (aka creating guardrails, or maybe laying down ground rules) to avoid such outcomes,” he says.

For Turner, a main concern with the use of generative AI as operational support for SOC analysts is that, while it may well help Tier-1 analysts work on Tier-2 problems, what happens if the LLM hallucinates? “If it comes back talking rubbish and the analyst can easily identify it as such, he or she can slap it down and help train the algorithm further. But what if the hallucination is highly plausible and looks like the real thing? In other words, could the LLM in fact lend extra credence to a false positive, with potentially dire consequences if the T1 analyst goes ahead and takes down a system or blocks a high-net-worth customer from their account for several hours?”
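The anchoring Turner describes can start very simply: before an analyst or an automated playbook acts on an LLM recommendation, verify that every entity the answer names actually exists in local telemetry, and escalate anything unverifiable to a human. The sketch below is a hypothetical illustration of that guardrail; the asset names and triage policy are invented.

```python
# Hypothetical guardrail: never auto-act on an LLM recommendation whose
# targets cannot be verified against a local asset inventory. A plausible
# hallucination may name systems that do not exist or must not be touched.

KNOWN_ASSETS = {"web-01", "db-02", "vpn-gw"}  # invented inventory

def triage_llm_recommendation(answer: str, proposed_targets: list[str]) -> str:
    """Return 'auto-approve' only when every proposed target is a known
    asset; otherwise route the recommendation to human review."""
    unknown = [t for t in proposed_targets if t not in KNOWN_ASSETS]
    if unknown:
        return f"escalate-to-human: unverified targets {unknown}"
    return "auto-approve"

print(triage_llm_recommendation("Isolate web-01 from the network", ["web-01"]))
print(triage_llm_recommendation("Isolate web-07 from the network", ["web-07"]))
```

A check like this does not stop the model from hallucinating, but it caps the blast radius of the scenario Turner worries about: a Tier-1 analyst acting on a confident, plausible, and wrong answer.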