Regulation supports the big players. See SB 1047 in California and read the first few lines:

> comply with various requirements, including implementing the capability to promptly enact a full shutdown, as defined, and implement a written and separate safety and security protocol, as specified

That absolutely kills open source, and it's disguised as a "safety" bill where "safety" means absolutely nothing (how do you "shut down" an LLM?). There's a reason Anthropic championed it even though it regulates Anthropic's own industry.

>That absolutely kills open source

Zvi says this claim is false: https://thezvi.substack.com/p/guide-to-sb-1047?open=false#%C...

>how are you "shutting down" an LLM?

Pull the plug on the server? Seems like it's just about having a protocol in place to make that easy in case of an emergency. Doesn't seem that onerous.

To be fair, I don't really agree with the concept of "safety" in AI, at least not the whole Terminator-esque framing that a lot of people seem to propagate. Safety is always in usage, and the cat's already out of the bag. I just don't know what harm they're trying to prevent in the first place.

Which server? The one you have no idea about, because you released your weights and anyone can download and run them at that point?