This is the low-quality, Reddit-style garbage that gets upvoted on HN these days?
What are we supposed to talk about in this thread exactly? The developers of this model are evil. Are we supposed to just write dry comments about benchmarks while OpenAI condones their models being deployed for autonomously killing people?
Yes I'm sure it makes a very nice bicycle SVG. I will be sure to ask the OpenAI killbots for a copy when they arrive at my house.
While low quality, it is extremely important, potentially historically significant too.
If it is actually that important, then maybe more effort should be made so it isn't "low quality." It cannot be very important to them if they're uninterested in presenting an intellectually compelling argument about it.
PS - If you think I am not sympathetic to what they're raising, you're very much mistaken. But they're not winning anyone new over to their side with this flamebait.
You can say your piece about how you don't like OpenAI working with the US military on lethal AI without making Reddit style quips.
The HN of old is no more unfortunately. Things get up or down voted based purely on political alignment.
As programmers become increasingly irrelevant in the whole picture, you'll see more posts like this.
"This account belongs to a lazy person" true
I was just reading the model card...
True and simply vote it down.
mycall would also be to do the same
Noticeably, yes, much more than usual. It’s quite bad. I need to start blocking accounts.
[flagged]
You are describing a problem that every AI company has, not one unique to OpenAI. What about other nation-states making autonomous AI robots that kill children; will you still single out OpenAI specifically? Maybe your concern is too late, and dozens of countries are already training their own AIs to do that or worse.
This company sucks, what about all the other ones that suck hmmmmmm?
All of these VC funded AI companies are bad. Full stop. Nothing good for humanity will come of this.
You underestimate my capacity for broad hatred
Absolutely amazing. Grateful to be living in this timeframe
What makes you think that they see bombing civilians as a bug, not a feature?
First real comment. I thought that at first too, but this could lower the number of potential ChatGPT users, and that would be against us (shareholders).
what a thoughtful comment! HN is so low quality these days
Evidence
Don't use the site this way.
https://news.ycombinator.com/newsguidelines.html
You made a burner account just to scold this guy? Don’t use burner accounts this way.
I think for your comment to follow the guidelines, you need to explain why the original comment did not follow them.
Customer values are relevant to the discussion given that they impact choice and therefore competition.
Not all rule-following is noble or wise.
AINT NO PARTY LIKE A GARRY TAN HOT TUB PARTY
news guidelines
Parlay?
Ironically, this would actually be a good thing. As we can see from Iran, Claude doesn’t quite have these bugs ironed out yet…
This is the exact attitude that led to a chatbot being used to identify a school for girls as a valid target.
The chatbot cannot be held responsible.
Whoever is using chatbots for selecting targets is incompetent and should likely face war crime charges.
"that lead to a chat bot being used to identify a school for girls as a valid target"
Has it been stated authoritatively somewhere that this was an AI-driven mistake?
There are myriad ways that mistake could have been made that don't require AI. These kinds of mistakes were certainly made by all kinds of combatants in the pre-AI era.
Do you think anyone is ever going to say this under any circumstances? That Anthropic were right and they were proved right the very next day?
Yeah yeah, they probably had a human in the loop, that’s not really the point though.
Targeting and accuracy mistakes happen plenty in wars that aren't assisted by AI. I don't think it's fair to assume that AI had a hand in the bombing of the school without evidence.
What attitude exactly are you talking about? The one that says that if you’re going to morally sell out it would be better if you at least tried not to kill children?