Sam Altman is a businessman. His job is to say whatever assuages his market, and that includes gaslighting you when you're disgusted by AI.
If you never expected Altman to be the figurehead of principled philosophy, none of this should surprise you. Of course the startup-alumni guy is going to project misaligned expectations in the hopes of building a multi-trillion-dollar company. The shareholders love that shit; Altman is applying the same lessons he learned at Worldcoin to a more successful business.
There was never any question in my mind why Altman was removed. OpenAI outgrew its need for grifters, but the grifter hadn't yet outgrown his need for OpenAI.
> His job is to say whatever assuages his market
I understand the cynicism but this is in fact not the job of a businessman. We shouldn't perpetuate the pathological meme that it is.
So the job of a businessman is not to increase shareholder value?
Nope. A CEO can't essentially steal from shareholders, but otherwise they have extremely broad latitude in how they engage in business.
There is no legal or moral imperative to make antisocial, unethical, or short term decisions that "maximize shareholder value."
This is something that morally weak people tell themselves (and others) to justify the depravity they're willing to sink to in order to satiate their greed.
The concept doesn't even make sense: different shareholders have different priorities and time horizons. A businessperson has no way to know what it objectively means to maximize their returns. They must make a subjective determination, and they have extremely broad latitude to do that.
If I run an AI business, then people using more AI means more business. If no one uses my AI, I go out of business.
Increasing shareholder value can be done, in the broadest sense, by just increasing business.
If I fund my own business, I can control growth and _choose_ ethics over profits, in the hope that stunted growth is acceptable if my customers value ethics too, and that whomever I someday pass my company to shares these values.
If I take capital investment, I now have a contractual agreement to provide returns on that investment. Yes, failure to adhere can result in lawsuits or legal penalties. Or I can be fired/voted out for failing to bring high enough returns. I now _cannot_ choose ethics over profits, due to the conflict of interest of self-preservation.
So you are correct: there is no legal or moral imperative to behave unethically, but there is instead a strong systemic, self-preserving incentive to do so.
I think we almost agree here, but you make it sound as if the exec can simply stand up and do the right thing. I argue the exec will simply be pushed aside for another.
This is what people refer to when they talk about the binds that hold modern-day mega-corps.
If you yourself are an exec, I personally think you can understand these truths, work with them as best you can, and still be a good human being. But there are lines that should not be crossed just to keep a job.
It is a collective issue we need to solve, one that of course starts with each individual seeing the true situation with kindness and compassion.
You’re just saying there are incentives for unethical behavior? Yeah, obviously.
These incentives don't need to be excused with "well, that's their obligation." It's not! Actually, a person's obligation is to act morally even when there are incentives otherwise, which is approximately all the time for nearly every person.
This is something children learn (lest they be excluded from their society), yet Very Smart People in the upper echelons of the business world conveniently forget it.
> If I take capital investment, I now have a contractual agreement to provide returns on that investment. Yes failure to adhere can result in lawsuits or legal penalties.
This is not true. If you've signed a contract that says anything like this, consider getting a real lawyer.
To be clear I'm not disgusted by AI in general, I'm disgusted by short form video and AI/ML in service of dopamine reward loop hacking.