> Using the word “Mentoring” is anthropomorphic and subconsciously makes you think it will learn.

I think this is a bit pedantic. Obviously the parent you’re replying to is referring to the concept of “in-context learning”, which is the actual industry / academic term for this. So you feed it a paper, and then it can use that info, but it needs steering / “mentoring” to be guided in the right direction.

Heck, the very name “machine learning” suggests these things can actually learn. “Reasoning” suggests that these things can reason, instead of being fancy, directed autocomplete. Etc.

In other news: data hydration doesn’t actually make your data wet. People use / misuse words all the time, and that causes their meaning to evolve.

Anthropomorphism is a subtle marketing tool used by these big AI companies, who are financially incentivized to push the myth of AGI and want everyone to believe they're right on the cusp of achieving it. It's good to be pedantic in this case, we shouldn't anthropomorphize these tools.

This is just a “hurr durr AI companies evil” argument without substance.

It’s the people that are the problem. Nobody told the grandparent to use the word “mentoring”, and my argument is that classifying them as anthropomorphizing AIs is a complete overreaction; I’d argue that defaulting to that accusation would be an insult to them, and it’s super pedantic.

But in-context learning is like a student only remembering what they’re being taught for the duration of the discussion. That’s not really how mentoring is meant to work, so pointing out the issues with the metaphor seems pretty reasonable.
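That distinction can be sketched with a toy model (the `ChatSession` class here is hypothetical, purely for illustration, not any real API): anything “taught” in context exists only for that session, while the base knowledge, standing in for the frozen weights, never changes.

```python
# Toy illustration of in-context learning vs. actual learning.
# ChatSession is a hypothetical stand-in, not a real LLM API.

class ChatSession:
    def __init__(self, base_knowledge):
        # Stands in for frozen model weights: shared, never updated.
        self.base_knowledge = set(base_knowledge)
        # Stands in for the context window: private to this session.
        self.context = set()

    def feed(self, fact):
        # "In-context learning": the fact only enters this session's context.
        self.context.add(fact)

    def knows(self, fact):
        return fact in self.base_knowledge or fact in self.context


session_a = ChatSession(base_knowledge={"the sky is blue"})
session_a.feed("the paper's main result")
print(session_a.knows("the paper's main result"))  # True: it's in context

# A fresh session shares the same "weights" but none of the context.
session_b = ChatSession(base_knowledge={"the sky is blue"})
print(session_b.knows("the paper's main result"))  # False: nothing persisted
```

Which is the commenter’s point: the “student” forgets everything the moment the conversation ends, which is not how mentoring is supposed to work.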

In other news: that words can change meaning doesn’t mean that every possible change in meaning would be beneficial to communication and therefore desirable. Would you support someone suggesting we use “left” to mean “right” simply on the basis that words can change meaning?

I agree it’s pedantic, and personally I don’t get bent out of shape about people anthropomorphizing LLMs. But I do think you get better results if you keep the text-prediction-machine mental model in your head as you work with them.

And that can be very hard to do, given that the UI we most often interact with them through is a chat session.

Absolutely, but there is no evidence that the grandparent was doing that. All they did was use the word “mentoring”, and my argument is not that anthropomorphizing isn’t a problem (it is), but that the response to this particular HN comment is super pedantic.

Obviously the real people who classify AI as human intelligence aren’t going to be the top comment on a thread about reviewing LLMs’ PhD-level papers. They are in very different, much more problematic areas of the internet.