Following a sweeping executive order aimed at managing the risks of artificial intelligence (AI), the Biden-Harris administration recently announced key AI actions.

In October, President Joe Biden issued an executive order to establish new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership around the world.

The executive order builds on previous actions, including the voluntary commitments from 15 companies to drive safe, secure and trustworthy development of AI.

The executive order directed the following actions:

  • Establish new standards for AI safety and security
  • Protect Americans’ privacy
  • Advance equity and civil rights
  • Stand up for consumers, patients and students
  • Support workers
  • Promote innovation and competition
  • Advance American leadership abroad
  • Ensure responsible and effective government use of AI

On Jan. 29, the White House announced that Deputy Chief of Staff Bruce Reed will convene the White House AI Council, consisting of top officials from a wide range of federal departments and agencies. Agencies reported that they have completed all of the 90-day actions tasked by the executive order and advanced other vital directives that it tasked over a longer timeframe.

Security leaders weigh in

Nicole Carignan, Vice President of Strategic Cyber AI at Darktrace

Cybersecurity is a prerequisite for safety, and it is encouraging to see the government continuing its efforts to achieve more secure AI. In order to achieve AI that is more privacy-preserving, predictable and reliable, we must continue to facilitate government and industry partnership in AI innovation and security.

AI safety and AI innovation go hand in hand. Historically, security was an afterthought in the development of AI models, leading to a skills gap between security practitioners and AI developers. The initiatives outlined by the Biden Administration, including the National AI Research Resource pilot and the EducateAI initiative, will help to train the current and next generation in AI, but we must also focus on security training to ensure AI is adopted safely.

Transparency and integrity of AI are necessary, and data scientists play a crucial role in ensuring the responsibility and security of AI tools. We appreciate the emphasis on the criticality of data scientists in ensuring data integrity, as well as benchmarks for the accuracy, safety and operationalization of AI output.

Gal Ringel, Co-Founder and CEO at Mine

President Biden's Executive Order on AI was a meaningful step forward, especially since comprehensive AI legislation from Congress is likely not on the near horizon (despite Senators going on record in support of the idea of AI legislation). The time since the Executive Order was signed has been quiet. Over the next few months, the White House's focus on AI governance should be on forming transparent working relationships with the tech companies behind the most powerful generative AI models, particularly since the capability threshold at which an AI model must submit to these safety tests and controls is so high.

The EU's AI Act is not expected to formally pass until at least May, so there is no rush to immediately institute risk assessment or data protection requirements on generative AI, although that time will come. As we are still in the early days of this technological shift, establishing a working rapport between the government and Big Tech on this issue and laying the groundwork for how these safety tests will unfold may not be a glamorous goal for the next few months, but it is a critical one. The government and Big Tech never aligned on data privacy issues until it was too late and the government's hand was forced by broad public support. There cannot be a repeat of that failure, or the consequences could be immeasurably more damaging when it comes to AI.

Omri Weinberg, Co-founder and CRO at DoControl

It is difficult to see how much self-reporting will protect US interests, since those who intend to act against those interests will choose not to report accurately, or at all.

Mandatory reporting from hyperscalers is potentially more reliable, but it assumes that the hyperscalers can detect the training of potentially powerful or dangerous AI models, which may not be possible thanks to advances in confidential computing and federated learning. There is also an interesting legal question if a US-based hyperscaler detects such activity in non-US zones. What is its obligation under this regulation versus the data privacy and opacity laws of other nations, especially if the model training is happening in those other jurisdictions?

If the goal is to “do something” to address the perceived threat of AI, then this action is doing something. However, it is far from clear whether this regulation will have any positive impact on mitigating risk or deterring future hostile cyberattacks.