
Is Artificial Intelligence Making People More Secure? Or Less?

Like anything, AI can be used maliciously. But when used for good, AI can be a game changer.

In May of this year, Members of the European Parliament (MEPs) agreed to adopt a blanket ban on the use of remote biometric identification (facial recognition) in public spaces along with predictive policing tools as part of the EU’s AI Act. The ban is a departure from the original proposal and the position backed in Council by EU member countries.

I believe the ban is a mistake that’s being fueled by fears about AI.

Government and industry must work together to educate the public about the benefits of AI and its ability to detect and fend off threats in cybersecurity and beyond. People’s distrust of AI underscores the need for education, not a halt to its use. At ForgeRock, we believe that AI is a powerful tool for preventing fraud and protecting sensitive data. But we also believe that people deserve to know how AI works, and that starts with transparency – made possible through explainable AI.

Explainable AI provides citizens and consumers with the reasons why the AI made a particular choice and enables organizations to understand AI decisioning and make corrections as needed.

AI’s role in cybersecurity

In an increasingly digital world, AI can help companies combat cybercrime. In our industry, identity and access management (IAM), AI is especially valuable, because it can help to prevent identity-based attacks, which are the leading cause of data breaches.

With its ability to analyze large quantities of data and recognize patterns, AI can quickly and automatically block known threats and bot activity. It can also monitor user behavior and make real-time decisions about whether to grant access or add step-up authentication if anything about the request is anomalous (such as a login from an unusual time zone).
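
To make this concrete, here is a minimal, hypothetical sketch of how contextual login signals might be turned into a risk score and an access decision. The signal names, weights, and thresholds are illustrative assumptions, not how ForgeRock or any particular product actually works:

```python
# Hypothetical sketch: scoring a login attempt from contextual signals.
# Signal names, weights, and thresholds are illustrative assumptions.

RISK_WEIGHTS = {
    "new_device": 0.35,         # user has never authenticated from this device
    "unusual_time_zone": 0.25,  # login time zone differs from the user's norm
    "unfamiliar_network": 0.20, # network not previously seen for this user
    "impossible_travel": 0.45,  # distance since last login is physically implausible
}

def score_login(signals: dict[str, bool]) -> float:
    """Combine boolean risk signals into a 0..1 risk score."""
    raw = sum(weight for name, weight in RISK_WEIGHTS.items() if signals.get(name))
    return min(raw, 1.0)

def decide(signals: dict[str, bool]) -> str:
    """Map the risk score to an access decision."""
    score = score_login(signals)
    if score >= 0.7:
        return "block"      # likely bot or account-takeover attempt
    if score >= 0.3:
        return "step_up"    # ask for an additional authentication factor
    return "allow"          # low risk: let the user through

# Example: a login from a new device in an unusual time zone
print(decide({"new_device": True, "unusual_time_zone": True}))  # -> "step_up"
```

In production, the weights would come from a trained model rather than hand-tuned constants, but the flow is the same: signals in, risk score out, decision applied in real time.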

This application of AI is known as risk decisioning, and it’s what fuels our ForgeRock Autonomous Access threat protection product. Autonomous Access takes in a range of signals about who is trying to do what, and then determines what they can or cannot do next. If it detects a threat or anomalous behavior, its dashboard explains the decision in human-readable form. Admins get a detailed view of access events along with an explanation of the risky authentication attempt – this makes it easier to explain to legitimate users why they were asked for further proof of their identity. Perhaps they got a new device or were connecting from an unfamiliar network.
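
As an illustration of what "explainable" can look like in practice, the hypothetical sketch below pairs the decision with the human-readable reasons behind it, so an admin or an end user can see why step-up was requested. The structure and message strings are assumptions for the example, not Autonomous Access's actual output format:

```python
# Hypothetical sketch: pairing a risk decision with human-readable reasons.
# The structure and message text are illustrative, not a real product API.

REASON_TEXT = {
    "new_device": "Sign-in from a device we haven't seen you use before",
    "unusual_time_zone": "Sign-in from a time zone that is unusual for you",
    "unfamiliar_network": "Sign-in from a network you haven't used before",
    "impossible_travel": "Sign-in location conflicts with your previous session",
}

def explain(decision: str, signals: dict[str, bool]) -> dict:
    """Bundle the decision with the signals that triggered it."""
    reasons = [REASON_TEXT.get(name, name) for name, fired in signals.items() if fired]
    return {"decision": decision, "reasons": reasons}

print(explain("step_up", {"new_device": True, "unusual_time_zone": True}))
# {'decision': 'step_up', 'reasons': ["Sign-in from a device we haven't seen you use before",
#  'Sign-in from a time zone that is unusual for you']}
```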

Organizations with AI-powered identity and access management can detect unexpected activity, stopping intruders in real time as they try to authenticate. They can also automate the process of eliminating over-provisioned access that enables attackers to use one compromised account to move laterally to higher-value targets.
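
A simplified illustration of that second point: flagging entitlements that were granted but never exercised during a review window, so they can be reviewed and revoked before an attacker can abuse them. The data shapes and entitlement names are assumptions for the example:

```python
# Hypothetical sketch: finding over-provisioned access.
# Entitlement names and the 90-day window are assumptions for illustration.

def unused_entitlements(granted: set[str], used: set[str]) -> set[str]:
    """Entitlements the account holds but has not exercised in the review window."""
    return granted - used

granted = {"crm:read", "crm:write", "billing:admin", "hr:read"}
used_last_90_days = {"crm:read", "crm:write"}

for entitlement in sorted(unused_entitlements(granted, used_last_90_days)):
    print(f"Flag for review/revocation: {entitlement}")
# Flag for review/revocation: billing:admin
# Flag for review/revocation: hr:read
```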

The benefits of AI in cybersecurity

Artificial intelligence and machine learning (AI/ML) can boost the speed and effectiveness of cybersecurity.

  • Recognizing suspicious activity: AI excels at pattern recognition and can quickly analyze access attempts and user activity to identify suspicious behavior and potentially malicious activity. As its models are retrained on new data, it also gets better at distinguishing normal behavior from emerging threat patterns (a simplified illustration follows this list).
  • Speed and scalability: AI can process and analyze large sets of data far more quickly than humans.
  • Managing digital identity: The use of AI can help with managing digital identities and access permissions more efficiently, restricting access to systems and files based on a user’s role, behavior, or risk score, along with the sensitivity of the data being accessed.
  • Assisting analytics efforts: By analyzing historical data, AI can help to predict future cybersecurity threats. It can also analyze user behavior to identify unusual activity that humans could miss.
  • Improving authentication options: By analyzing contextual factors, such as location, device, and behavior, AI can calculate the risk level of an action or request and issue the appropriate authentication response. Trusted users can sail through authentication with choices like passwordless authentication, while suspicious or high-risk requests are prompted for further authentication or stopped and held for remediation.
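
To illustrate the first and fourth points above, here is a minimal, hypothetical sketch of flagging activity that deviates from a user's own baseline. A real system would use far richer features and learned models; the simple z-score over daily event counts is purely for illustration:

```python
# Hypothetical sketch: flagging activity far outside a user's historical norm.
# Daily event counts and the z-score threshold are illustrative assumptions.

from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's event count if it deviates strongly from the baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# 30 days of a user's daily resource-access counts, then a sudden spike
baseline = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10] * 3
print(is_anomalous(baseline, today=240))  # -> True: worth investigating
print(is_anomalous(baseline, today=12))   # -> False: normal behavior
```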

Attackers are using AI. We should, too.

The fear of AI is understandable – its future is unknown and its potential for misuse is high. It also raises serious privacy concerns when facial recognition technology is involved, which undoubtedly contributed to the EU’s recent ban.

But it’s important to remember that AI is already playing an essential role in keeping us safe from sophisticated attacks, including those that are themselves fueled by AI. The advent of generative AI makes it far easier to impersonate others, and there’s been an increase in attackers exploiting AI to develop new malware, create synthetic identities, and generate more targeted and authentic-looking phishing campaigns. Organizations that resist the use of AI in cybersecurity will be hamstrung in their efforts to defend against these attacks.

View our on-demand webinar, “Protect Your Customers Against Identity Fraud,” to learn about the use of AI to defend against AI-powered threats, or visit the Autonomous Access page.

