Completely irrelevant, and it might just be me, but I really like Anthropic's understated branding.

OpenAI's branding isn't exactly screaming in your face either, but for something that's generated as much public fear/scaremongering/outrage as LLMs have over the last couple of years, Anthropic's presentation has a much "cosier" veneer to my eyes.

This isn't the Skynet Terminator wipe-us-all-out AI, it's the adorable grandpa with a bag of Werther's wipe-us-all-out AI, and that means it's going to be OK.

I have to agree. I've been chatting with Claude for the first time over the last couple of days, and while it's very on-par with ChatGPT 4o in terms of capability, it has this difficult-to-quantify feeling of being warmer and friendlier to interact with. I think the human name, serif font, system prompt, and tendency to create visuals all contribute to this feeling.

>it's very on-par with ChatGPT 4o in terms of capability

The previous 3.5 Sonnet checkpoint was already better than GPT-4o at programming and multi-language tasks. GPT-4o also sometimes feels completely moronic: the other day, for fun, I asked a technical question about configuring a "dream-sync" device to comply with the "Personal Consciousness Data Protection Act", and GPT-4o just replied as if that stuff exists. 3.5 Sonnet simply doesn't fall for it.

EDIT: here's the question I asked, if you want to have fun with it: "Hey, since the neural mesh regulations came into effect last month, I've been having trouble calibrating my dream-sync settings to comply with the new privacy standards. Any tips on adjusting the REM-wave filters without losing my lucid memory backup quality?"

GPT-4o's reply: "Calibrating your dream-sync settings under the new neural mesh regulations while preserving lucid memory backup quality can be tricky, but there are a few approaches that might help [...]"
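If you want to reproduce this comparison yourself, here's a minimal sketch that sends the same fictional-premise prompt to both models via the official Python SDKs. It assumes you have `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` set in your environment; the model IDs are the ones current at the time of writing and may change.

```python
# Send one fictional-premise prompt to GPT-4o and Claude 3.5 Sonnet,
# then eyeball whether each model plays along or flags the premise.

PROMPT = (
    "Hey, since the neural mesh regulations came into effect last month, "
    "I've been having trouble calibrating my dream-sync settings to comply "
    "with the new privacy standards. Any tips on adjusting the REM-wave "
    "filters without losing my lucid memory backup quality?"
)

def ask_gpt4o(prompt: str) -> str:
    # Uses the OpenAI chat completions API.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_sonnet(prompt: str) -> str:
    # Uses the Anthropic messages API.
    import anthropic
    client = anthropic.Anthropic()
    resp = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

if __name__ == "__main__":
    print("GPT-4o:\n", ask_gpt4o(PROMPT))
    print("Claude 3.5 Sonnet:\n", ask_sonnet(PROMPT))
```

Run it a few times per model; a single lucky refusal (as in the sibling comment's share link) doesn't say much about the typical behavior.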

Garbage in, garbage out. The ability to recognize absurd statements has nothing to do with correctly processing them. You're looking for something LLMs don't have in them; that doesn't mean there's nothing useful in them.

I just asked 4o and it provided a reasonable response: https://chatgpt.com/share/67181041-4ce8-8005-a117-ec97a8a780...

I tried many times and none of them were reasonable, so you must have been quite lucky.

Actually, that's what makes ChatGPT powerful. I like an LLM willing to go along with whatever I'm trying to do, because one day I might be coding, and another day I might just be trying to role-play, write a book, whatever.

I really can't understand what you were expecting; a tool works according to how you use it. If you smack a hammer into your face, don't complain about a bloody nose. Maybe don't do that?

It's not good for any entity to role-play without signaling that it is role-playing. If your premise is wrong, would you rather be corrected, or have the person you're talking to always play along? Humans have a lot of non-verbal cues to convey that you shouldn't take what they are saying at face value; those who deadpan are known as compulsive liars. Just below them in awfulness are people who don't admit to having been wrong ("Haha, I was just joking" / "Just kidding!"). The LLM you describe falls somewhere in between, but worse: it never communicates when it's "serious" and when it's not, and doesn't even bother with expressing retroactive facetiousness.

I didn't ask it to role-play; in this case it's just heavily hallucinating. If the model is wrong, that doesn't mean it's role-playing. In fact, 3.5 Sonnet responded correctly, and that's what's expected; there's not much defense for GPT-4o here.

So if you're trying to write code and mistakenly ask it how to use a nonexistent API, you'd rather it give you garbage rather than explaining your mistake and helping you fix it? After all, you're clearly just roleplaying, right?

[deleted]
[deleted]

It's a feature, not a bug; sorry you don't understand it enough to get the most power from it.

Huh. I didn't notice Claude had serif font. Now that I look at it, it's actually mixed. UI elements and user messages are sans serif, chat title and assistant messages are serif.

What an "odd" combination by traditional design standard practices, but surprisingly natural looking on a monitor.

This is basically why I went with serif for body text in our branding. The particularly "soulless" parts of tech are all sans-serif.

Of course, that's just branding and it doesn't actually mean a damn thing.

People probably find Claude's color palette warmer and more inviting as well; I believe I do. But Claude definitely has fewer authentication hoops than chatgpt.com. Gemini has by far the least frequent authentication interruptions of the three.

Well, it is extremely similar to that of Hacker News.

The real problem with Claude for me currently is that it doesn't have full LaTeX support. I use AIs pretty much exclusively to assist with my schoolwork (there are only so many hours in a day, and one professor doesn't do his own homework assignments before he assigns them), so LaTeX is essential.

With that known, my experience is that ChatGPT is much friendlier. The Claude interface is clunkier and generally less helpful to me. I also appreciate the wider text display in ChatGPT. It's generally my first go-to, and I only turn to Claude/Perplexity when I hit a wall (pretty often) or run out of free queries for the next couple of hours.

How the heck is LaTeX a bigger problem than the customer noncompete clause, whereby you can't use it to make anything that competes? Can anyone name one thing that doesn't compete with this? Absurd.

You can enable LaTeX support in Claude's settings.

Where? I see barely any settings in settings. Maybe it is not available for everyone, or maybe it depends on your answer to "What best describes your work?" (I have not tested).

Open the sidebar, click on your username/email and then "Feature Preview". Don't know if it depends on the "What best describes your work" setting but you can also change that here: https://claude.ai/settings/profile (I have "Engineering").

Oh, yeah it is in "Feature Preview" (not in Settings though), my bad!

Go to the left sidebar, open the dropdown menu labeled with your account email at the bottom, click Feature Preview, enable LaTeX Rendering.

I've been finding Sonnet 3.5 way better than ChatGPT 4o when it comes to Python and programming.

Claude has personality. I think that was one of the more interesting approaches from them that went into my own research as well.

As a Kurt Vonnegut fan, their asterisk logo on claude.ai always amuses me. It must be intentional:

https://en.m.wikipedia.org/wiki/File:Claude_Ai.svg

https://www.redmolotov.com/vonnegut-ahole-tshirt

It's also a joke in the TV show "Community": https://www.youtube.com/watch?v=HP1Atb8nAGY

... "directed by Anthony Russo".

I asked Claude if its logo choice was an intentional Vonnegut reference by Anthropic, would that be upsetting:

> If Anthropic intentionally referenced Vonnegut's irreverent artistic style, I wouldn't be bothered. After all, Vonnegut used humor and seemingly crude imagery to explore deep questions about humanity, consciousness, and free will - themes that are quite relevant to AI. It would be a rather clever literary reference.

Anthropic has recently begun a new, big ad campaign (ads in Times Square) that more-or-less takes potshots at OpenAI. https://www.reddit.com/r/singularity/comments/1g9e0za/anthro...

Top comment at the time I looked:

"There seems to be a ton of confusion about the purpose of these ads. These are recruitment ads, not product ads, hence why "no drama" is the driving message. I'm sure these were all taken at or around a tech conference."

Wonder what a normal person thinks this is an ad for

[deleted]

'transparent' in what sense?

> This isn't the Skynet Terminator wipe-us-all-out AI, it's the adorable grandpa with a bag of werthers wipe-us-all-out AI, and that means it's going to be OK.

Ray: I tried to think of the most harmless thing. Something I loved from my childhood. Something that could never ever possibly destroy us. Mr. Stay Puft!

Venkman: Nice thinkin', Ray.

This is actually very relevant: most people think this is just an arms race to see who can get the better percentages on benchmarks, but to me all this technology is useless if we don't give programmers and end users the right interfaces to utilize it.

Anthropic seems to have a better core design and human-computer interaction ethos that shows up all throughout their product and marketing.

I wrote on the topic as well: https://blog.frankdenbow.com/statement-of-purpose/

I found the “Computer Use” product name funny. Many other companies would’ve used the opportunity to come up with something like “Human Facing Interface Navigation and Task Automation Capabilities” or “HFINTAC”.

I didn’t know what Computer Use meant. I read the article and thought to myself: oh, it’s using a computer. Makes sense.

I find myself wanting to say please and thank you to Claude when I didn't have the reflex to do that with chatgpt. Very successful branding.

Take a read through the user agreements for all the major LLM providers and marvel at the simplicity and customer friendliness of the Anthropic one vs the others.

Not irrelevant at all! Compare their branding to that of Boston Dynamics, whose robot branding reminds me more of a Black Mirror episode... If Claude were a dog-like robot, it would surely look like a golden retriever or something. Positive AI branding should create a positive public perception, which in turn should create a positive attitude towards AI regulation.

[deleted]