I started listening, but it reminded me too much of the Rush Limbaugh Family Guy episode to continue more than a couple of times. It's all rage: five minutes of hate for our generation. Not constructive and not healthy.
"our generation" - what? which of his targets do you identify with?
People love being outraged. And there's no easier target than useless, overpaid, digitally illiterate CEOs, AI slop, and techbros.
Just for the record, I don’t love being outraged, yet I frequently have to make an active effort to maintain my composure. Society is being looted from the top, and in both scope and scale it’s only getting worse. I can appreciate that many find Zitron’s angry energy grating, but I don’t think it’s disingenuous; he’s clearly tapping into a widely shared sense of a degraded society.
Anyone who sincerely believes they’re witnessing 2000s-level greed, economic malfeasance, and oversight failures would be justified in behaving frantically. Zitron’s core arguments describe a deeply troubling scenario, and, as this article mentions, he is not terribly concise: his articles contain a great deal of detail and explanation for his positions, and in response I have largely seen only ad hoc complaints about his bedside manner, or else vague gestures toward as-yet-unrealized prosperity.
> I can appreciate that many find Zitron’s angry energy grating, but I don’t think it’s disingenuous
I think it is. If you read this article, for instance:
https://www.wheresyoured.at/longcon/
You will find that he vacillates between “this shit doesn't impress me even a little”; “everything I am describing is unfathomably dangerous”; and “I refuse to sit here and pretend that any of this matters”.
This is not an honest opinion. He considers his enemies both too strong and too weak, depending upon whether he wants to make you feel like they are pathetic or scary. He’s just saying whatever he thinks will make people angry at something he hates.
I feel obliged to say that this isn't true of Ed at all. He does a lot more "real journalism" than most of the journalists I've interacted with, and he grills the people he's interviewing on technical details more than anyone else I've seen in the field.
I had a stressful five minutes this morning talking about how floating-point numbers can be a problem for some financial applications, because he wouldn't accept a shoddy answer.
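(For anyone wondering what "floating point being weird for money" means in practice, here's a minimal sketch of my own, not anything from Ed's interviews: the classic rounding surprise, and the usual workaround of doing currency math in a decimal type or integer cents.)

```python
from decimal import Decimal

# Binary floats cannot represent most decimal fractions exactly,
# so tiny errors creep into currency arithmetic.
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False

# Accumulating many small amounts drifts away from the exact total.
print(sum([0.10] * 1000))  # slightly off from 100.0

# Decimal keeps exact cents, which is why financial code usually
# uses a decimal type (or integer cents) rather than floats.
print(Decimal("0.10") * 1000 == Decimal("100.00"))  # True
```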
Objecting to his writing and tone is one thing, but Ed is not dishonest or sloppy.
(And I've seen hospitals do some horrible things through funding misallocation and reporting mistakes, at least partially caused by the AI bubble, that quite probably killed people. So I think it's fair to say that some AI applications can be unimpressive while the net effect is disastrous.)
> Objecting to his writing and tone is one thing, but Ed is not dishonest or sloppy.
Do you think somebody can honestly hold all of these opinions at the same time?
> this shit doesn't impress me even a little
> everything I am describing is unfathomably dangerous
> expensive, stupid, irksome, quasi-useless new product
> I refuse to sit here and pretend that any of this matters.
So it’s simultaneously unimpressive, unfathomably dangerous, stupid, quasi-useless, and it doesn’t matter?
In context:
> ... this shit doesn't impress me even a little. Wow, you created a superficially-impressive research project that's really long and that cites a bunch of shit it found online that it made little attempt to verify?
> ... everything I am describing is unfathomably dangerous ... expensive, stupid, irksome, quasi-useless new product... Generative AI is a financial, ecological and social time bomb ...
> I'm tired of reading stories about Sam Altman perpetually saying that we're a year away from "everything changing" that exist only to perpetuate the myth that Silicon Valley gives a shit about solving anyone's problems other than finding new growth markets for the tech industry. I refuse to sit here and pretend that any of this matters.
So he's saying it's a bubble, and yes, it can be unimpressive (when its pompous claims are examined), dangerous and stupid (because it's a bubble), and at the same time not matter (because its inflated value rests on the claim that it does matter).
The research is unimpressive, the bubble is dangerous, the product is stupid, and the tech doesn't matter as much as it is claimed to.
> The research is unimpressive, the bubble is dangerous, the product is stupid, and the tech doesn't matter as much as it is claimed to.
No. Why are you sane-washing him? That’s not what he said.
> everything I am describing is unfathomably dangerous
> I refuse to sit here and pretend that any of this matters.
He is very clearly saying two mutually exclusive things in the same article. He wants you to believe that AI is pathetic and useless, and he wants you to believe it’s unfathomably dangerous.
This is not the writing of somebody who has taken a cold, hard look at the facts and is trying to inform people. This is the writing of somebody who wants to make as many people despise AI as possible and is stringing together as many anti-AI positions as he can even though they cannot form a coherent, rational position together.
The first paragraph reads to me as clearly what he is trying to convey, and the latter was simple enough to break down the way the previous commenter suggested.
In any case, I just said I know him personally and he does take a cold, hard look at the facts, so that's that!