Cybersecurity provider ZeroFox has announced a partnered capability with Google Cloud to warn users of malicious URLs and fake websites in a bid to disrupt phishing campaigns.

As part of the partnership, ZeroFox will automatically detect phishing domains for customers and submit verified malicious URLs through Google Cloud’s Web Risk Submission API, disrupting attacks and warning users of malicious content on billions of devices through browser warnings. The move is expected to benefit both ZeroFox customers and Google Cloud users.

“If a URL or domain flagged by ZeroFox is validated as malicious, Google will provide a warning message to users across its 5 billion devices in a matter of minutes, advising them not to access the domain in question,” said James Foster, founder and CEO of ZeroFox.

AI engine used to take down malicious domains

ZeroFox provides a SaaS-based offering that uses global intelligence collection and AI analysis across a broad set of data sources to deliver continuous domain monitoring, accurately detecting account takeovers, website spoofs, and impersonations. It also features a domain takedown service built on an AI analysis engine that automatically detects malicious domains, including typosquatting and homoglyphs (common spelling-based domain-jacking methods) and other early indicators of phishing sites.
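To illustrate the kind of spelling-based spoof detection described above, the following is a minimal sketch of a typosquatting and homoglyph check. The brand watch list, homoglyph table, and similarity threshold are illustrative assumptions for this example, not ZeroFox's actual engine, which the article notes relies on far broader AI analysis.

```python
from difflib import SequenceMatcher

# A small table of common Latin look-alike substitutions; production
# engines use much larger confusables tables (e.g., Unicode's).
HOMOGLYPHS = {"0": "o", "1": "l", "3": "e", "5": "s", "rn": "m", "vv": "w"}

# Hypothetical brand watch list for this sketch.
PROTECTED_BRANDS = ["zerofox", "google"]

def normalize(domain: str) -> str:
    """Fold common look-alike characters in the domain label back to ASCII."""
    label = domain.lower().split(".")[0]
    for glyph, target in HOMOGLYPHS.items():
        label = label.replace(glyph, target)
    return label

def is_suspicious(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains whose normalized label closely resembles a protected brand."""
    label = normalize(domain)
    return any(
        SequenceMatcher(None, label, brand).ratio() >= threshold
        for brand in PROTECTED_BRANDS
    )
```

A homoglyph registration such as `zer0fox.com` normalizes to `zerofox` and is flagged, while an unrelated domain like `example.com` falls well below the similarity threshold.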
Post-detection, ZeroFox works with its “global disruption network,” consisting of domain hosts, registrars, and other partners, to have these malicious sites taken down or blocked.

“The ZeroFox external cybersecurity platform collects intelligence across the internet, looking for indicators of threats targeting our customers, including malicious domains, social media impersonations, data breaches, and more,” Foster said. “We leverage AI analysis and detection capabilities in order to provide internet speed and scale of the collection as well as detection of otherwise hidden threats, such as object detection in images and logo infringement.”

ZeroFox uses AI mainly in the processing and analysis phases of its backend pipeline. During the processing stage, AI technologies such as computer vision and natural language processing are applied to all content. At the analysis stage, more specific AI techniques are used depending on the use case. This results in highly accurate alerts being generated and sent to customers through the platform’s service delivery model, with 100% precision (all alerts are true positives), Foster said.

To ensure that relevant and actionable alerts are delivered quickly, ZeroFox employs a combination of AI and human intelligence in its service delivery model, an approach consistent with other cybersecurity monitoring, alerting, and response systems.

While protection against external attacks is a crucial addition to an organization’s security regime, only a few security products cater to this segment. Most solutions, however, have some form of machine learning and behavior analysis in place to detect and protect against malicious activity.

“The most popular approach is for security companies to OEM this service from OpenText/Webroot, through BrightCloud reputation service, which is the recognized market leader for this segment,” said Dave Gruber, principal analyst at ESG.
“Some other security companies maintain their own databases of malicious URLs, embedding similar services within their offerings through a gateway or API-based add-on security offering.”
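For API-based offerings like those described in the article, the hand-off to Google follows the Web Risk Submission API's REST shape (`projects.submissions.create`). Below is a hedged sketch that only constructs the endpoint and JSON body; the project ID and URL are placeholders, and authentication and the actual HTTP call (normally handled by a Google Cloud client library) are left out.

```python
import json

# Public REST endpoint shape for Web Risk submissions (v1).
WEBRISK_ENDPOINT = "https://webrisk.googleapis.com/v1/projects/{project}/submissions"

def build_submission(project_id: str, malicious_url: str) -> tuple[str, str]:
    """Return the endpoint URL and JSON body for submitting a verified malicious URL."""
    endpoint = WEBRISK_ENDPOINT.format(project=project_id)
    body = json.dumps({"submission": {"uri": malicious_url}})
    return endpoint, body

# Example with placeholder values; no network call is made here.
endpoint, body = build_submission("example-project", "http://phish.example.test/login")
```

Once accepted and validated by Google, a submitted URI feeds the browser warnings described earlier in the article.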