Understanding SB 1047: Balancing AI Safety and Innovation

Why Focusing on Deployment, Not Release, is Key to Effective AI Regulation

The US is seeing a wave of AI safety legislation, driven by lobbying from a range of advocacy groups. California’s SB 1047 aims to regulate large AI models, but the legislators behind such bills often lack a deep understanding of how the technology works, which leads to practical problems.

Restrictive AI laws could centralize AI jobs in big tech companies, reducing opportunities for smaller developers and startups.

AI models consist of weights (long lists of numbers) and the code that runs them. Training is the process of adjusting those weights so the model performs better; a minimal sketch follows below. SB 1047 mandates shutdown capabilities during training, but training itself isn’t inherently dangerous: risk only materializes when a model is deployed. Regulating deployment, not training or release, is the key to AI safety.
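To make the terminology concrete, here is a minimal, hypothetical sketch (not any real system's code) of what “training” means: a loop that repeatedly nudges the weights, which are plain numbers, so predictions improve. The toy target y = 2x + 1 is invented for illustration. Note that nothing in the loop is dangerous in itself; a “shutdown capability” amounts to stopping the loop.

```python
# Minimal illustration: a "model" is just weights, and training is a
# loop that adjusts them to reduce prediction error.

import random

# Toy data: noisy samples of y = 2x + 1.
data = [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(20)]

w, b = 0.0, 0.0   # the model's "weights": just numbers
lr = 0.001        # learning rate (step size)

for step in range(5000):  # the training loop; halting it IS the "shutdown"
    x, y = random.choice(data)
    err = (w * x + b) - y
    # Gradient descent: nudge each weight against the error gradient.
    w -= lr * err * x
    b -= lr * err

print(f"learned w ~ {w:.2f}, b ~ {b:.2f}")  # approaches w=2, b=1
```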

Base models are general-purpose and require extensive, expensive training. Fine-tuned models, which adapt a base model to specific tasks, can change a model’s behavior quickly and cheaply, as the sketch below illustrates. Because anyone who obtains released weights can fine-tune them, SB 1047’s current wording could make open-source development impractical, pushing developers toward closed commercial models.
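A hedged sketch of why release-time guarantees are hard to enforce: continuing the toy example above, once weights are public, a few thousand cheap gradient steps (“fine-tuning”) can repurpose them for a completely different objective. The new target y = -3x + 4 is again invented for illustration.

```python
# "Fine-tuning" is just more weight adjustment, applied after release.
# A downloader can cheaply retarget the model, so any behavior it had
# at release time offers no guarantee about behavior after fine-tuning.

import random

w, b = 2.0, 1.0  # pretend these are the released weights (y = 2x + 1)

# New objective chosen by whoever downloaded the weights: y = -3x + 4.
new_data = [(x, -3 * x + 4) for x in range(20)]

lr = 0.001
for step in range(5000):  # far cheaper than training a base model
    x, y = random.choice(new_data)
    err = (w * x + b) - y
    w -= lr * err * x
    b -= lr * err

print(f"fine-tuned w ~ {w:.2f}, b ~ {b:.2f}")  # now approximates y = -3x + 4
```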

Effective AI legislation should focus on deployed systems rather than released models. This mirrors how other industries are regulated: laws target end-to-end processes and finished products, not their underlying components, which ensures AI safety without stifling innovation.

If SB 1047 restricts model release, open-source AI development could grind to a halt, concentrating power in big tech companies. That would politicize AI regulation and reduce US global competitiveness, especially against countries like China that lead in open-source AI.