These announcements always make it clear how little companies releasing new AI models actually care about the risks and concerns of developing artificial intelligence.
CEOs love to talk about how important regulation is, how their company needs to develop it before the "wrong people" do, and how they are concerned about what could happen if AI development goes wrong.
Then they announce the latest model aimed at expanding both the accuracy and breadth of use cases across multiple modalities. Sure, the release links to a security and ethics page, but that page reads more like a company's internal "pillars of success" document: vague phrases that define little to nothing in the way of real, specific concerns or measures to mitigate them. It basically boils down to "Don't be evil," with no clear definition of what that would mean or how they'd prevent the new, more powerful and broad-reaching system from being used in ways that are "evil."
CEOs are the "wrong people". Leaders of obscenely large organizations, unchecked by law or ethics, wielding and gatekeeping what amounts to superpowers. They are the literal supervillains, not some shadowy "terrorists" or another made-up bogeyman.
I'd argue we could get a long way by removing legal protections and incentives for such large organizations. Those protections and incentives seem to me to be the root cause behind such large centralizations of power.
If companies and their leadership couldn't operate so unchecked by our existing laws and public opinion, we might not have executives worth worrying about.
For example, if taxes weren't so easy to dodge and the public actually had a chance to sue large corporations for damages, they might not get so large. If, when losing a lawsuit, companies couldn't shuffle funds around and spin off dummy companies to dodge the pain, and if they weren't so often able to get away with paying pennies on the dollar on lost suits, they might think twice about doing some things. When you know your entire business is actually on the line, you have to be more careful.
Throw in election and lobbying reform and we could at least be having a much different conversation about corporate power.
It has been lobotomized enough already. Look at the C++20 Concepts example: https://news.ycombinator.com/item?id=39395020
Limiting features in the public API isn't quite the same as limiting the technical feasibility of developing an artificial intelligence or consciousness.
Limiting public features will help a bit with concerns over how someone might use a public GPT API, but the technological advancements will be made either way, and ultimately companies won't be able to gatekeep who can use them with 100% accuracy. The boom in GPU hardware is similarly pushing us further down the road to AI development, and all the moral and ethical questions that go along with it, even if AI companies kept the use of their GPUs and models entirely private.
I don't see any other way, personally.
We can argue that a lot of people have done pretty bad things using the internet, but should it have been regulated in advance?
Regulation doesn't have to be the answer, though. The very same people talking out of both sides of their mouths here are the ones who can choose to just not invest in it.
Lock up the hardware in an offline facility and experiment there, if they really think it's important. Hell, even just skipping the doublespeak would be a big step. If they really aren't concerned about the risk, then own it; don't tell me it's risky while also releasing a new, more powerful version every 6-12 months.