President Biden's plan to regulate AI


President Biden hands his pen to Vice President Harris after signing the Artificial Intelligence Safety, Security, and Trust executive order on October 30, 2023. | Chip Somodevilla/Getty Images

Now comes the hard part: Congress.

Since the widespread release of generative AI systems like ChatGPT, there’s been an increasingly loud call to regulate them, given how powerful, transformative, and potentially dangerous the technology can be. President Joe Biden’s long-promised Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is an attempt to do just that, through the lens of the administration’s stated goals and within the limits of the executive branch’s power. The order, which the president signed on Monday, builds on previous administration efforts to ensure that powerful AI systems are safe and being used responsibly.

“This landmark executive order is a testament of what we stand for: safety, security, trust, openness, American leadership, and the undeniable rights endowed by a creator that no creation can take away,” Biden said in a short speech before signing the order.

The lengthy order is an ambitious attempt to accommodate the hopes and fears of everyone from tech CEOs to civil rights advocates, while spelling out how Biden’s vision for AI works with his vision for everything else. It also shows the limits of the executive branch’s power. While the order has more teeth to it than the voluntary commitments Biden has secured from some of the biggest AI companies, many of its provisions don’t (and can’t) have the force of law behind them, and their effectiveness will largely depend on how the agencies named within the order carry them out. It may also depend on whether those agencies’ authority to make such regulations is challenged in court.

Broadly summarized, the order directs various federal agencies and departments that oversee everything from housing to health to national security to create standards and regulations for the use or oversight of AI. These include guidance on the responsible use of AI in areas like criminal justice, education, health care, housing, and labor, with a focus on protecting Americans’ civil rights and liberties. The agencies and departments will also develop guidelines that AI developers must adhere to as they build and deploy this technology, and dictate how the government itself uses AI. There will be new reporting and testing requirements for the AI companies behind the largest and most powerful models. And throughout, the order encourages the responsible development and use of safer AI systems wherever possible.

The Biden administration made sure to frame the order as a way to balance AI’s potential risks with its rewards: “It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks,” White House deputy chief of staff Bruce Reed said in a statement.

What the order does…

The order invokes the Defense Production Act to require companies to notify the federal government when training an AI model that poses a serious risk to national security or public health and safety. They must also share the results of their risk assessments, known as red-team testing, with the government. The Department of Commerce will determine the technical thresholds that trigger these requirements; in the interim, the order applies them to models trained with more than 10^26 computing operations, effectively limiting the rule to the models with the most computing power.
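
To put that compute threshold in perspective, here is a minimal, illustrative sketch in Python. It assumes the order’s interim trigger of 10^26 integer or floating-point operations and the common rule-of-thumb estimate of roughly six operations per model parameter per training token; the function names and model sizes are hypothetical.

    # Illustrative sketch: would a given training run trip the order's
    # interim reporting trigger of 1e26 total operations?
    REPORTING_THRESHOLD_OPS = 1e26  # interim threshold named in the order

    def estimated_training_ops(parameters: float, tokens: float) -> float:
        """Rule-of-thumb training compute: ~6 operations per parameter per token."""
        return 6.0 * parameters * tokens

    def crosses_threshold(parameters: float, tokens: float) -> bool:
        """True if the estimated training compute exceeds the reporting trigger."""
        return estimated_training_ops(parameters, tokens) > REPORTING_THRESHOLD_OPS

    # A hypothetical 1-trillion-parameter model trained on 20 trillion tokens:
    # ~6 * 1e12 * 2e13 = 1.2e26 operations, just over the line.
    print(crosses_threshold(1e12, 2e13))  # True

By this rough math, the requirement kicks in at a scale that models available at the time had reportedly not yet reached, which fits the order’s mostly forward-looking scope.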

The National Institute of Standards and Technology will also set red-team testing standards that these companies must follow, and the Departments of Energy and Homeland Security will evaluate various risks those models could pose, including the threat that they could be employed to help make biological or nuclear weapons. The DHS will also establish an AI Safety and Security Board, made up of experts from the private and public sectors, which will advise the government on the use of AI in “critical infrastructure.” Notably, these rules largely apply to systems developed going forward, not to what’s already out there.

Fears that AI could be used to create chemical, biological, radiological, or nuclear (CBRN) weapons are addressed in a few ways. The DHS will evaluate the potential for AI to be used to produce CBRN threats (as well as its potential to counter them), and the DOD will produce a study that looks at AI biosecurity risks and comes up with recommendations to mitigate them.

Of particular concern here is the production of synthetic nucleic acids — genetic material — using AI. In synthetic biology, researchers and companies can order synthetic nucleic acids from commercial providers, which they can then use to genetically engineer products. The fear is that an AI model could be deployed to plot out, say, the genetic makeup of a dangerous virus, which could be synthesized using commercial genetic material in a lab.

The Office of Science and Technology Policy will work with various departments to create a framework for screening synthetic nucleic acid procurement, the DHS will ensure it’s being adhered to, and the Commerce Department will also create rules and best practices for synthetic nucleic acid sequence providers to make sure they’re following that framework. Research projects that involve synthetic nucleic acids must ensure that providers adhere to the framework before the projects can receive funding from federal agencies.

The order has provisions for preserving Americans’ privacy, although it acknowledges that the ability to do so is limited without a federal data privacy law and calls on Congress to pass one. Good luck with that; while Congress has put forward various data privacy bills over the years and the need for such regulations seems more than clear by now, it has yet to get close to passing any of them.

Another concern about AI is its ability to produce deepfakes: text, images, and sounds that can be nearly impossible to tell apart from those created by humans. Biden noted in his speech that he’s been fooled by deepfakes of himself. The EO calls for the Department of Commerce to create and issue guidance on best practices for detecting AI-generated content. But calling for that guidance is a far cry from having the technology to actually detect such content reliably, something that has eluded even the leading companies in the space.

...and why it’s not enough

Even before the order, Biden had taken various actions related to AI, like the White House’s Blueprint for an AI Bill of Rights and securing voluntary safety commitments from tech companies that develop or use AI. While the new Biden EO is being hailed as the “first action of its kind” in US government history, the Trump administration issued an AI EO of its own back in 2019, which laid out the government’s investment in and standards for the use of AI. But that, of course, predated the widespread release of powerful generative AI models that has brought increased attention to — and concern about — the use of AI.

That said, the order is not meant to be the only action the government takes; the legislative branch has work to do, too. Senate Majority Leader Chuck Schumer, whom Biden singled out for praise during the order signing, attempted to take the reins in April with the release of a framework for AI legislation, and he has also organized closed-door meetings with tech CEOs to give them a private forum for input on how they should be regulated. The Senate Judiciary subcommittee on privacy, technology, and the law put forward a bipartisan framework in September.

Rep. Don Beyer (D-VA), vice chair of the House’s AI Caucus, said in a statement that the order was a “comprehensive strategy for responsible innovation,” but that it was now “necessary for Congress to step up and legislate strong standards for equity, bias, risk management, and consumer protection.”

While the Biden administration repeatedly claimed that this is the most any government has done to ensure AI safety, several other governments have also taken action, most notably the European Union. The EU’s AI Act has been in the works since 2021, though it had to be revised to account for generative AI, and the US reportedly isn’t thrilled with it. China put rules for the use of generative AI into effect last summer. The G7 has been working out a framework of rules and laws for AI, and it just announced an agreement on guiding principles and a voluntary code of conduct. Vice President Kamala Harris will be in England this week for an international summit on regulating the technology.

As for whether the order managed to be all things to all people, the general response seems to be cautious optimism, with the recognition that the order has limits and is only a start. Microsoft president Brad Smith called it “another critical step forward,” while the digital rights advocacy group Fight for the Future said in a statement that it was a “positive step” but that it was waiting to see whether and how agencies carry out its mandates.

“We face a genuine inflection point,” Biden said in his speech, “one of those moments where the decisions we make in the very near term are going to set the course for the next decades … There’s no greater change that I can think of in my life than AI presents.”

----------------------------------------

By: Sara Morrison
Title: President Biden’s new plan to regulate AI
Sourced From: www.vox.com/technology/2023/10/31/23939157/biden-ai-executive-order
Published Date: Tue, 31 Oct 2023 18:45:45 +0000
