California Legislature Passes AI Regulation Bill, Awaiting Governor’s Approval

The California Legislature recently passed SB-1047, a bill aimed at preventing potential "critical harms" from large AI models before they occur. The legislation, now awaiting approval from Governor Gavin Newsom, mandates that companies developing large AI models implement stringent safety protocols to prevent catastrophic outcomes. Specifically, the bill targets scenarios in which AI models could be used to create weapons capable of causing mass casualties, classifying these under the category of "critical harms."

The bill applies only to the largest AI models: those trained with at least 10^26 FLOPS (floating-point operations, a measure of computation) at a training cost of at least $100 million. Companies such as OpenAI, Microsoft, and Google develop models that meet this threshold. Under the bill, the original developer remains accountable for a model until another developer spends $10 million to create a derivative of it. The bill also requires a safety protocol that includes an "emergency stop" capability to shut down the entire model in a crisis. Developers must create testing procedures that address the risks posed by their AI models, and must hire third-party auditors annually to assess their safety practices.
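
To make the coverage thresholds concrete, here is a minimal Python sketch of the two criteria described above; the bill itself contains no code, so the function and variable names are illustrative assumptions:

```python
# Hypothetical sketch of SB-1047's coverage test; thresholds are taken
# from the article, all names are illustrative.

COMPUTE_THRESHOLD_FLOPS = 1e26      # training compute threshold (10^26 FLOPS)
COST_THRESHOLD_USD = 100_000_000    # minimum training cost ($100 million)

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model meets both coverage criteria."""
    return (training_flops >= COMPUTE_THRESHOLD_FLOPS
            and training_cost_usd >= COST_THRESHOLD_USD)

# Example: a hypothetical frontier-scale training run falls under the bill.
print(is_covered_model(training_flops=2e26, training_cost_usd=150_000_000))  # True
```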

A newly formed California agency, the Board of Frontier Models, will oversee the rules. Written certification will be mandatory for all public AI models that meet the bill's thresholds. The board will have nine members, drawn from the AI industry, the open-source community, and academia, appointed by California's governor and legislature. It will be responsible for advising California's attorney general about possible breaches of SB-1047.

The developer's chief technology officer (CTO) must submit an annual certification assessing the AI model's potential risks and the company's compliance. In the event of an "AI safety incident," developers must report it to the Frontier Model Division (FMD) within 72 hours. Developers that cannot meet these directives must stop training the model. If a company's model is involved in a catastrophic event, California's attorney general can sue the company. Penalties scale with training cost: for a model that cost $100 million to train, they run up to $10 million for a first violation and up to $30 million for subsequent violations.
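
As a rough illustration of the penalty arithmetic, here is a hypothetical Python sketch; the 10% and 30% rates are inferred from the article's $100 million example, and the function name is an assumption rather than anything defined by the bill:

```python
def max_penalty(training_cost_usd: float, first_violation: bool) -> float:
    """Upper bound on the civil penalty: 10% of training cost for a first
    violation, 30% thereafter (rates inferred from the article's figures)."""
    rate = 0.10 if first_violation else 0.30
    return training_cost_usd * rate

# For a model that cost $100 million to train:
print(max_penalty(100_000_000, first_violation=True))   # 10000000.0 ($10M)
print(max_penalty(100_000_000, first_violation=False))  # 30000000.0 ($30M)
```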
