It’s as if every researcher in this field is getting high on the small amount of power they have from denying others access to their results. I’ve never been as unimpressed by scientists as I have been in the past five years or so.
“We’ve created something so dangerous that we couldn’t possibly live with the moral burden of knowing that the wrong people (who are never us, of course) might get their hands on it, so with a heavy heart, we decided that we cannot just publish it.”
Meanwhile, anyone can hop on an online journal and for a nominal fee read articles describing how to genetically engineer deadly viruses, how to synthesize poisons, and all kinds of other stuff that is far more dangerous than what these LARPers have cooked up.
> It’s as if every researcher in this field is getting high on the small amount of power they have from denying others access to their results. I’ve never been as unimpressed by scientists as I have been in the past five years or so.
This is absolutely nothing new. With experimental things, it's not uncommon for a lab to develop a new technique and omit slight but important details to give them a competitive advantage. Similarly, in the simulation/modelling space it's been common for years for researchers not to publish their research software. There's been a lot of lobbying on that front by groups such as the Software Sustainability Institute and Research Software Engineer organisations like RSE UK and RSE US, but there are a lot of researchers who just think they shouldn't have to do it, even when publicly funded.
> With experimental things, it's not uncommon for a lab to develop a new technique and omit slight but important details to give them a competitive advantage.
Yes, to give them a competitive advantage. Not to LARP as morality police.
There’s a big difference between the two. I’ll take greed over self-righteousness any day.
I’ve heard people say that they’re not going to release their software because people wouldn’t know how to use it! I’m not sure the motivation really matters more than the end result though.
> “We’ve created something so dangerous that we couldn’t possibly live with the moral burden of knowing that the wrong people (who are never us, of course) might get their hands on it, so with a heavy heart, we decided that we cannot just publish it.”
Or, how about: "If we release this as is, then some people will intentionally misuse it and create a lot of bad press for us. Then our project will get shut down and we'll lose our jobs."
Be careful assuming it is a power trip when it might be a fear trip.
I've never been as unimpressed by society as I have been in the last 5 years or so.
When I see individuals acting out of fear, I try not to blame them. Fear triggers deep instinctual responses. For example, to a first approximation, a particular individual operating in full-on fight-or-flight mode does not have free will. There is a spectrum here. Here's a claim, which seems mostly true: the more we can slow down impulsive actions, the more hope we have for cultural progress.
When I think of cultural failings, I try to criticize areas where culture could realistically do better. I think of areas where we (collectively) have the tools and potential to do better. Areas where thoughtful actions by some people turn into a virtuous snowball. We can't wait for a single hero, though it helps to create conditions so that we have more effective leaders.
One massive cultural failing I see -- one that could be dramatically improved -- is this: being lulled into shallow contentment (e.g., via entertainment, power seeking, or material possessions) at the expense of (i) building deep and meaningful social connections and (ii) using our advantages to give back to people all over the world.
> It’s as if every researcher in this field is getting high on the small amount of power they have from denying others access to their results.
Even if I give the comment a lot of wiggle room (such as changing "every" to "many"), I don't think even a watered-down version of this hypothesis passes Occam's razor. There are more plausible explanations, including (1) genuine concern by the authors; (2) academic pressures and constraints; (3) reputational concerns; (4) self-interest in embargoing the underlying data so they have time to be the first to write it up. To my eye, none of these fit the category of "getting high on power".
Also, patience is warranted. We haven't seen what these researchers are going to release -- and from what I can tell, they haven't said yet. At the moment I see "Repositories (coming soon)" on their GitHub page.
I think it's more likely they are terrified of someone making a prompt that gets the model to say something racist or problematic (which shouldn't be too hard), and the backlash they could receive as a result of that.
Is it a base model, or did it get some RLHF on top? Releasing a base model is always dangerous.
The French released a preview of an AI meant to support public education, but they released the base model, with unsurprising effects [0].
[0] https://www.leparisien.fr/high-tech/inutile-et-stupide-lia-g...
(no English source, unfortunately, but the title translates as: "“Useless and stupid”: French generative AI Lucie, backed by the government, mocked for its numerous bugs")
Is there anyone with a spine left in science? Or are they all ruled by fear of what might be said about whatever might happen?
Selection effects. If showing that you have a spine means getting growth opportunities denied to you, and not paying lip service to current politics in grant applications means not getting grants, then anyone with a spine would tend to leave the field behind.
Maybe they are concerned by the widespread adoption of the attitude you are taking -- make a very strong accusation, then, when it is pointed out that the accusation might be off base, continue to attack.
This constant demonization of everyone who disagrees with you makes me wonder if 28 Days Later wasn't more true than we thought: we are all turning into rage zombies.
p-e-w, I'm reacting to much more than your comments. Maybe you aren't totally infected yet, who knows. Maybe you'll heal.
I am reacting to the pandemic, of which you were demonstrating symptoms.
Wow, this is needlessly antagonistic. Given the emergence of online communities that bond over conspiracy theories and racist philosophies in the 20th century, it's not hard to imagine the consequences of widely disseminating an LLM that uneducated people in these communities could use to propagate and further discredited scientific theories (racial ones, for example) for bad ends.
We can debate whether it's good or not, but ultimately they're publishing it and are in some very small way responsible for its ends. At least that's how I see their interest in disseminating the LLM through a responsible framework.
Thanks. I think this just took on a weird dynamic. We never said we'd lock the model away; not sure how that impression emerged for some. That aside, this was an announcement of a release, not the release itself. The main purpose was gathering feedback on our methodology: standard procedure in our domain is to first gather criticism, incorporate it, then publish results. But I understand people just wanted to talk to it. Fair enough!
Scientists have always been generally self-interested, amoral cowards, just like every other person. They aren't a unique or higher form of human.