One year later: President Biden’s AI executive order

October 16, 2024

Practitioners and scholars take stock of the impact of the executive action and consider what’s next for this transformative technology

In October 2023, President Joe Biden signed a sweeping executive order aimed at promoting the “safe, secure, and trustworthy development and use of artificial intelligence.” The executive order directed the Department of Commerce to develop guidelines governing the use of AI, with a particular focus on “dual-use foundation models,” which are trained on broad data and can be used across a wide range of contexts.

Now, a year later, AI scholars and practitioners are assessing the executive order’s effectiveness and considering what’s next for the transformative technology.

“The executive order focuses on two things very early on. It says we must focus on harnessing AI as well as protecting against the perils. And there’s a lot, text wise, devoted to the latter point, and I would like to have seen more devoted to the former point of harnessing AI, especially in federal agencies,” Dean Alderucci, a senior congressional fellow with the House Committee on Science, Space, and Technology, said during a recent event held at the Johns Hopkins University Bloomberg Center.

Balancing Precaution with Innovation

Much of the conversation at the AI Ethics and Governance Symposium focused on the perceived dichotomy between fostering innovation and ensuring safety in AI development. Alderucci argued that while it is crucial to mitigate risks, overly cautious approaches could stifle innovation and hinder societal benefits. He wants to see more attention and concrete plans for how federal agencies in particular can harness the power of AI.

“It would be really simple to destroy all potential for any risk whatsoever. Don’t use AI. The reason we have these conversations about risks is because we want to use AI, and for good reason,” Alderucci said. “We quite literally could transform our society, could transform people’s lives for the better.”

Ami Fields-Meyer, a senior fellow at the Harvard Kennedy School and former senior policy advisor to Vice President Kamala Harris, argued that the public has cause to be concerned about the explosion of generative AI.

“The American people have really good reasons at this point to be skeptical of technology and to be skeptical of new technology that has promises associated with it,” Fields-Meyer said.

Evolving Regulation

For Alderucci, it’s critical that any regulatory efforts reflect the rapid evolution of AI. He emphasized the need to remain open to reevaluating our approaches and to become comfortable with the notion that we don’t know where AI will be in a year. 

“You don’t take your playbook and say, ‘It’s wonderful, we thought about it for so long. We are going to put it under museum glass, and then we’ll just build upon that like scaffolding,’” Alderucci said. “You really need to reconsider a lot of things.”