The decision Sunday is a major blow to efforts in the United States to rein in a homegrown industry that is rapidly evolving with little oversight.
The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons.
How exactly do LLMs do that? If you’ve given an LLM’s pseudorandom output control over your electrical grid, no regulation will mitigate your stupidity.
That, and the Internet has been teaching people how to create bombs since the dial-up days. I don’t predict that LLMs will be either a benefit or a detriment to that particular strain of natural selection.
If you hook an LLM up as a replacement for a manual/analog power plant interface and start asking the translator to intuit decisions based on fuzzy inputs, you can create a cascade of errors that results in grid failure.
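A toy sketch of what I mean, with made-up numbers: each handoff between subsystems goes through an "LLM translator" that is right on average but carries pseudorandom spread, and individually tolerable errors compound down the chain.

    import random

    def llm_translate(value):
        """Stand-in for an LLM intuiting a number from a fuzzy input:
        unbiased, but with roughly 10% pseudorandom spread."""
        return value * random.gauss(1.0, 0.10)

    setpoint = 100.0            # value the final stage should receive
    SAFE_BAND = (80.0, 120.0)   # outside this, protection trips the stage

    value = setpoint
    for stage in range(8):      # eight fuzzy handoffs between subsystems
        value = llm_translate(value)

    print(f"delivered setpoint: {value:.1f}")
    if not SAFE_BAND[0] <= value <= SAFE_BAND[1]:
        print("outside safe band -> stage trips, dumping its load on neighbors")
    else:
        print("held this run; the failure mode is probabilistic")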
This rule would prevent a business or public regulator from doing such a thing without proving out safeguards.
Too many people are confused and think an LLM is an actual AI, and not just a tarted-up ELIZA bot from 1966.
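For anyone who hasn’t looked under ELIZA’s hood: the whole trick is pattern matching plus pronoun reflection. A minimal sketch with toy rules of my own, not Weizenbaum’s actual script:

    import re

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
    RULES = [
        (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"(.*)", re.I), "Tell me more."),
    ]

    def reflect(text):
        # Swap first-person words for second-person ones.
        return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

    def respond(text):
        for pattern, template in RULES:
            match = pattern.match(text)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I need a break from this thread"))
    # -> Why do you need a break from this thread?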
Does he understand the halting problem? I doubt it, but the legislators evidently don’t either.
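For reference, the halting problem in a few lines: assume a hypothetical halts() oracle existed, then write the program that defeats it. A sketch of the classic diagonal argument, not anyone’s real API:

    def halts(f):
        """Hypothetical oracle: True iff f() eventually halts. Cannot exist."""
        raise NotImplementedError

    def diagonal():
        # Do the opposite of whatever the oracle predicts about this function.
        if halts(diagonal):
            while True:   # oracle said "halts" -> loop forever
                pass
        # oracle said "loops forever" -> halt immediately

    # Either answer halts(diagonal) gives is wrong, so no such oracle exists.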
I think it’s more about asking it for the steps to create a bomb, or how to disrupt the grid, for example, where to cut the major edges.
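"Major edges" has a precise graph meaning, by the way: a minimum edge cut. A toy sketch on a made-up five-node network using networkx, which is the same analysis an operator would run to decide which lines are worth hardening:

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("plant", "sub1"), ("plant", "sub2"),
        ("sub1", "sub3"), ("sub2", "sub3"),
        ("sub3", "city"),
    ])

    # The smallest set of edges whose removal disconnects supply from demand.
    print(nx.minimum_edge_cut(G, "plant", "city"))
    # -> the single sub3-city line: everything funnels through it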
That sounds like a self-correcting issue right there.
Still a public safety issue.
Is it more of a public safety issue than if they actually build a working one from a legit bomb manual and deploy it?
No, but I think it could make the knowledge more easily available, which increases the risk that it may happen.
I see you’ve never heard of The Anarchist Cookbook.
And the governor vetoed it.