Yesterday, I used Gemini to evaluate some pictures I took. It said things like, "This is great! Beautiful eye and sense of proportions." Then, when I added "no sycophancy" to the prompt, the evaluation changed to "poor technical skills, digital distortion, don't even think of publishing those pictures, you fool."

While LLMs are a phenomenal technological achievement, I am already becoming somewhat jaded, rather than being increasingly bullish. They are very useful as coding agents and excellent as a human-friendly, more efficient Google search, but confusing to the point of being useless in many areas (as of now, of course).

They're not even a great replacement for search; I have minimal trust in the answers and summaries they give.

One example (paraphrased): “Find me a daycare for a Y-year-old in X area of SF, with the key attributes/pros/cons of each.” It returned wonderfully presented options highlighting different teaching styles. But it neglected to mention that, of the top two, one was a Gan (Jewish-focused) and one was Mandarin immersion.

I am repeating what many have said. Nevertheless, it is becoming clear that LLMs can increase productivity (in certain areas and at certain times) for people who are already knowledgeable in a specific niche or field, because they write better prompts, pick the right tools, and critically evaluate the LLM's output.

But for those who lack those skills, LLMs mostly seem to be, at best, a better search and, at worst, an agent of confusion.