OpenAI exec says California’s AI safety bill might slow progress

In a new letter, OpenAI chief strategy officer Jason Kwon insists that AI regulations should be left to the federal government. As reported previously by Bloomberg, Kwon says that a new AI safety bill under consideration in California could slow progress and cause companies to leave the state.

The letter is addressed to California State Senator Scott Wiener, who originally introduced SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

According to proponents like Wiener, the bill establishes standards ahead of the development of more powerful AI models, requires precautions like pre-deployment safety testing and other safeguards, adds whistleblower protections for employees of AI labs, gives California’s Attorney General power to take legal action if AI models cause harm, and calls for establishing a “public cloud computer cluster” called CalCompute.

In a response to the letter published Wednesday evening, Wiener points out that the proposed requirements apply to any company doing business in California, whether or not it is headquartered in the state, so the argument “makes no sense.” He also writes that OpenAI “…doesn’t criticize a single provision of the bill” and closes by saying, “SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing, namely, test their large models for catastrophic safety risk.”

Following concerns from politicians like Zoe Lofgren and Nancy Pelosi, companies like Anthropic, and organizations such as California’s Chamber of Commerce, the bill passed out of committee with a number of amendments that included tweaks like replacing criminal penalties for perjury with civil penalties and narrowing pre-harm enforcement abilities for the Attorney General.

The bill is awaiting a final vote to go to Governor Gavin Newsom’s desk.
