Just try to challenge and mentor people on not using it because it's incapable of the job and is wasting all our time, when the mandate from on high is to use more of it.
Seems to me like people have to push back more directly with a collective effort; otherwise the incentives are all wrong.
What I don't get is why people think this action has value. The maintainer of the project, a senior dev, could ask an LLM to do that themselves.
I can't imagine Googling for something, seeing someone on (for example) Stack Overflow commenting on code, and then filing a bug with the maintainer, just copying and pasting what someone else said into the bug report.
All without comprehending the code or the project, or even running into the issue yourself. Without running a test case yourself. Without knowing the codebase.
It's just all so absurd.
I remember in Asimov's Empire series of books, at one point a scientist wanted to study something. Instead of going to study whatever it was, say... a bug, the scientist looked at all the scientific studies and papers from the past 10,000 years, weighed the arguments, and pronounced what the truth was. All without, you know, just looking at and studying the bug. This was touted as an example of the Empire's decay.
I hope we aren't seeing the same thing. I can so easily see kids growing up with AI in their Bluetooth ears, or maybe a Neuralink, and never having to make a decision -- ever.
I recall how Google became a crutch for me. How, before Google, I had to do so much more work just to work with software: using manpages, or reading the source code, before ease of search was a thing.
Are we going to enter an age where every decision made is coupled with the coaching of an AI? This thought process scares me. A lot.
I'd say that people treat everything as if it were gamified. So the motivation would just be to boast about having "raised 1 gazillion security reports in open-source projects such as curl, etc. etc.".
AI just makes these idiots faster these days, because the only cost for them is typing "inspect the `curl` code base and generate me some security reports".
I remember the Digital Ocean "t-shirt gate" scandal, where people would add punctuation to README files of random repositories to win a free t-shirt.
https://domenic.me/hacktoberfest/
It wasn't fun if you had anything with a few thousand stars on GitHub.
> I remember in Asimov's Empire series of books, at one point a scientist wanted to study something.
Or "The Machine Stops" (1909):
> Those who still wanted to know what the earth was like had after all only to listen to some gramophone, or to look into some cinematophote.
> And even the lecturers acquiesced when they found that a lecture on the sea was none the less stimulating when compiled out of other lectures that had already been delivered on the same subject. “Beware of first-hand ideas!” exclaimed one of the most advanced of them. “First-hand ideas do not really exist. They are but the physical impressions produced by love and fear, and on this gross foundation who could erect a philosophy? Let your ideas be second-hand, and if possible tenth-hand, for then they will be far removed from that disturbing element — direct observation. [...]"
The person who submitted the report was looking to be the person who found a critical bug, that's it. It's not about understanding, fixing, or helping anything; it's about gaining clout.
Exactly, probably so they can get a job, write a blog post, or sell NordVPN on a podcast showing off how amazing and easy this is.
IMO, this sort of thing is downright malicious. It not only takes up time for the real devs to actually figure out if it's a real bug, but it also makes them cynical about incoming bug reports.
> Using manpages, or looking at the source code, before ease of search was a thing.
Yup. I learned sockets programming just from manpages because Google didn't exist at that point, and even if it had, I didn't have internet at home.
I have two teenagers. They sometimes have a completely warped view of how hard things are or that other people have probably thought the same things that they’re just now able to think.
(This is completely understandable and “normal” IMO.)
But it leads them to sometimes think that they’ve made a breakthrough and not sharing it would be selfish.
I think people online see other people filing insightful bug reports and being viewed positively for it, misdiagnose their own thought as being insightful, and file a bug report based on that.
At its core, I think it’s a mild version of narcissism or self-centeredness / lack of perspective.
I read a paper yesterday where someone had used an LLM to read other papers and was claiming that this was doing science.
> I read a paper yesterday where someone had used an LLM to read other papers and was claiming that this was doing science.
I'm not trying to be facetious or eye-poking here, I promise... But I have to ask: What was the result; did the LLM generate useful new knowledge at some quality bar?
At the same time, I do believe something like "Science is more than published papers; it also includes the process behind it, sometimes dryly described as merely 'the scientific method'. People sometimes forget other key ingredients, such as a willingness to doubt even highly-regarded fellow scientists, who might even be giants in their fields. Don't forget how it all starts with a creative spark of sorts, an inductive leap, followed by a commitment to design some workable experiment given the current technological and economic constraints. The ability to find patterns in the noise in some ways is the easiest part."
Still, I believe this claim: there is NO physics-based reason that says AI systems cannot someday cover every aspect of the quote above: doubting, creativity, induction, confidence, design, commitment, follow-through, pattern matching, iteration, and so on. I think the question is probably "when", not "if", this will happen, but hopefully before we get there we ask "What happens when we reach AGI? ASI?" and "Do we really want that?".
There's no "physics-based" reason a rat couldn't cover all those aspects. That would truely make Jordan Peterson, the big rat, the worlds greatest visionary. I wouldn't count on it though.
What do you expect? Rich dumbasses like Travis Kalanick go on podcasts and say how they are inventing new physics by harassing ChatGPT.
How are people who don't even know how much they don't know supposed to operate in this hostile an information space?
Now just imagine some malicious party overwhelming software teams with shitloads of AI bug reports like this. I bet this will be weaponized eventually, if it isn't already.
Bill Joy's 'Why the Future Doesn't Need Us' feels more and more correct, sadly.
My sister had a fight over this and resigned from her tenure-track position at a liberal arts college in Arkansas.