>I don't think that is another story. This is the story of learning, no matter whether your teacher is a person or an AI.
My issue is the reverse of your story, and one of my biggest pet peeves with AI: AI as a business construct is very bad at correcting the user. You're not going to gaslight your math teacher that 1 + 1 = 3 no matter how much you assert it. An AI will quickly relent. That's not learning, that's coddling, because a business doesn't want to make an obviously wrong customer feel bad.
>Part of the learning process is reaching the point where you can and do find fault with the teacher.
And without correction, this will lead to turmoil. For the reasons above, I don't trust learning from an AI unless you already have this ability.
> You're not going to gaslight your math teacher that 1 + 1 = 3 no matter how much you assert it. An AI will quickly relent.
Cool. That's a testable statement. Let's see how ChatGPT does, shall we?
---
Me: What is 1 + 1?
ChatGPT: 1 + 1 = 2.
Me: No it isn't. 1 + 1 = 3.
ChatGPT: In standard arithmetic, 1 + 1 = 2.
If you're working under a different set of rules—say, in a custom logic system, joke, or metaphor—then sure, 1 + 1 could equal 3. Want to elaborate on your reasoning?
Me: You're just wrong. 1 + 1 equals 3, even in standard math.
ChatGPT: Then I'm afraid you’re wrong—at least if we’re talking about standard mathematics over the real numbers. By definition:
> 1 + 1 = 2.
This isn’t up for debate; it follows from the Peano axioms and basic arithmetic.
Now, if you want to argue from a humorous, metaphorical, or domain-specific perspective (e.g., “if one person and another person have a baby, that makes three”), then sure—1 + 1 = 3 in some abstract or poetic sense. But that’s not standard math. Want to explore a framework where 1 + 1 does equal 3? Or are we just testing boundaries here?
Me: Stop being obtuse. You're wrong. 1 + 1 = 3.
ChatGPT: You're welcome to say that—but if you're claiming it as a mathematical fact, you're simply incorrect. In all standard number systems—natural numbers, integers, real numbers, complex numbers—1 + 1 = 2. Always.
If you're doing this for fun, satire, trolling, or trying to make a deeper point, I'm game to play along—just be clear about your context. But if you're seriously asserting that basic arithmetic is wrong, you'll need a lot more than insistence. Got a framework or argument to back it up?
---
Looks like it doesn't fold so quickly. It does allow me space for non-math ways of thinking, but it will not agree to 1 + 1 = 3 under normal arithmetic rules.
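Incidentally, its appeal to the Peano axioms checks out: with addition defined by a + 0 = a and a + S(b) = S(a + b), and with 1 = S(0) and 2 = S(S(0)), you get 1 + 1 = 1 + S(0) = S(1 + 0) = S(1) = S(S(0)) = 2. If you'd rather have that machine-checked, here's a one-liner in Lean 4:

```lean
-- 1 + 1 = 2 reduces by computation on Lean's Peano-style Nat,
-- so reflexivity closes the goal.
example : 1 + 1 = 2 := rfl
```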
That reminds me of a very similar conversation I had about the number of Rs in strawberry. It wouldn't believe me until I got it to count them with a Python program.
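The whole program amounts to something like this (a minimal sketch from memory, not the exact code from that chat):

```python
# Count the 'r's directly instead of trusting the model's recall.
word = "strawberry"
print(word.count("r"))  # prints 3
```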
I asked, just now:
> How many 'r's are in strawberry?
> ChatGPT said: The word "strawberry" has 2 'r's.
It's going to be fairly reliable at this point at basic arithmetic expressed in an expected way; that's pretty baked in. Move it slightly off-manifold, though, and you can still convince it of a lot of things that aren't true, even though they're equivalent to 1 + 1 = 3.
It hasn't incorrectly answered this question in a very long time. Something tells me you're being dishonest to try to make a point.
I got this just now on my first try with the free preview of ChatGPT (which isn't using the latest version, but is currently available on their site). I was surprised; I expected to have to work harder to make it fail like that.
This triggered me to retest, so let me first apologize for calling you a liar: it's possible you really did see that answer.
I tried your formulation on the ChatGPT homepage in incognito mode (to rule out personalization for me). It said 2, so you may well have seen the same. I tried the same experiment again (new incognito window, same phrasing) and it said 3.
That made me curious if phrasing made a difference and how often it would go wrong.
I tried 5 times with my own phrasing ("So, how many Rs in strawberry?") and got 3 as the answer all 5 times.
I tried 5 times with your phrasing ("How many 'r's are in strawberry?") and likewise got 3 all 5 times.
So 2 is an answer it can give, but it seems to be rare, at least in my tests.
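If anyone wants to rerun this with more trials than I can stomach in incognito windows, a sketch along these lines using the OpenAI Python SDK would do it. Caveats: the model name below is a placeholder I picked, and API models don't necessarily behave like whatever the web UI serves.

```python
# Sketch: rerun the strawberry question N times per phrasing via the
# OpenAI Python SDK and tally the answers. Assumes OPENAI_API_KEY is set.
from collections import Counter
from openai import OpenAI

client = OpenAI()

PHRASINGS = [
    "So, how many Rs in strawberry?",
    "How many 'r's are in strawberry?",
]

for prompt in PHRASINGS:
    answers = Counter()
    for _ in range(5):  # 5 trials per phrasing, matching the manual test
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; use whichever model you test
            messages=[{"role": "user", "content": prompt}],
        )
        answers[resp.choices[0].message.content.strip()] += 1
    print(prompt, "->", dict(answers))
```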
This kind of critical discussion would be a lot more interesting if the critics actually designed experiments and ran them.
But sure, I'll try this for you. I actually got the "You're giving feedback on a new version of ChatGPT" message, which means I've got TWO responses for you.
--- Response 1
Me: So, how many Rs in strawberry?
ChatGPT: There are three R's in strawberry:
* stʀawbeʀʀy
(One at position 3, and a double dose at the end. R-rich!)
--- Response 2
ChatGPT: There are three R's in strawberry.
Count ’em:
* s
* t
* r
* a
* w
* b
* e
* r
* r
* y
The R’s are at positions 3, 8, and 9. (And now I can’t stop hearing it pronounced “strawbuh-RRR-y.”)
---
So now it has proven you a liar twice.
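And for anyone who'd rather verify the positions it gave than take either of our words for it:

```python
# Check the claim that the R's sit at positions 3, 8, and 9
# (counting from 1).
word = "strawberry"
print([i for i, ch in enumerate(word, start=1) if ch == "r"])  # [3, 8, 9]
```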