Mon | Jan 29, 2024 | 4:27 PM PST

The Biden Administration recently unveiled new rules requiring developers of major artificial intelligence (AI) systems to disclose vital information, particularly safety test results, to the U.S. Department of Commerce. The mandate, issued under the Defense Production Act, is part of the sweeping risk-management measures laid out in the administration's recent Executive Order on AI.

While tech policy experts see the transparency requirements as a positive step, cybersecurity professionals have mixed opinions on whether the self-reporting rules will effectively mitigate dangers from irresponsible AI development.

"It is difficult to see how much self-reporting will protect U.S. interests since those who intend to act outside those interests will choose not to report accurately or at all," said Omri Weinberg, Co-founder and CRO at DoControl.

However, Nicole Carignan, Vice President of Strategic Cyber AI at Darktrace, said that partnerships between government and industry are key to achieving more secure, ethical AI. "AI safety and AI innovation go hand in hand," she said, adding that the administration's increased investments in AI training and education can help close the skills gaps that have contributed to AI security issues in the past.

The Executive Order also directs agencies to complete risk assessments for AI across all critical infrastructure sectors, launch a pilot of the National AI Research Resource, accelerate government hiring of AI talent, and establish a healthcare AI task force, among other measures.

"The time since the Executive Order was signed has been quiet, and over the next few months the focus on AI governance from the White House should be on forming transparent working relationships with the tech companies behind the most powerful generative AI models," said Gal Ringel, CEO of data privacy firm Mine.

While calling the order a "meaningful first step," Ringel argued that comprehensive legislation around responsible AI development is still needed, though likely not imminent.

The Biden Administration has framed its approach as balancing AI innovation with managing emerging risks. But the new rules have limitations, Weinberg argued, as they "assume that the Hyperscalers can detect training of potentially powerful or dangerous AI models, which may not be possible."

With Congress unlikely to pass legislation anytime soon, the onus will be on the White House and federal agencies to ensure these reporting requirements have teeth. Their success in bringing Big Tech to the table may prove decisive in determining whether the U.S. can lead the way on AI accountability.

Follow SecureWorld News for more stories related to cybersecurity and AI.