No? If you ask it to proofread your stuff, any competent model just fixes your grammar without adding anything on its own. At least that's my experience. Simply don't ask for anything that involves major rewrites, and of course verify the result.
If you can’t communicate effectively in the language how are you evaluating that it doesn’t make you sound like a bot?
Getting your code reviewed doesn't mean you can't code
Verification is easier than generation, especially for natural language.
The amount of time that my colleagues and I have had to spend fighting the urge to rewrite something instead of fixing it says otherwise. This has been a well-documented phenomenon for decades, so it's definitely not just my experience. I had the same urge when I started coding, and I had to fight it in myself for a long time.
> any competent model just fixes your grammar without adding anything on its own
Grammatical deviations constitute a large part of an author's voice. Removing those deviations is altering that voice.
That's the point. Their voice is unintelligible in English, and they prefer a voice that English-speakers can understand.
I have a prompt to make it not rewrite, but just point out "hey, you could rephrase this better." I still keep my tone, but the clanker can identify thoughts that are incomplete. Stuff that spell checkers can't do.
Yeah. It's "pick your poison". If your English sounds broken, people will think poorly of your text. And if it sounds like LLM speak, they won't like it either. Not much you can do. (In a limited time frame.)
Lately I have more appreciation for broken English and short, to-the-point sentences than for the 20-paragraph AI bullet-point lists with 'proper' formatting.
Maybe someone will build an AI model that's succinct and to the point someday. Then I might appreciate using it a little more.
This. AI translations are so accessible now that if you’re going to submit machine-translations, you may as well just write in your native language and let the reader machine translate. That’s at least accurately representing the amount of effort you put in.
I will also take a janky script for a game hand-translated by an ESL indie dev over the ChatGPT House Style 99 times out of 100 if the result is even mostly comprehensible.
You can ask AI to be succinct and it will be. If you need to, you can give examples of how it should respond. It works amazingly well.
It's extraordinarily hit or miss. I've tried giving instructions to be concise, to only give high level answers, to not include breakdowns or examples or step-by-step instructions unless explicitly requested, and yet "What are my options for running a function whenever a variable changes in C#?" invariably results in a bloated list with examples and step-by-step instructions.
The only thing that changed in all of my experimentation with various saved instructions was that sometimes it prepended its bloated examples with "here's a short, concise example:".
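For comparison, the concise answer that question is looking for is usually just "raise an event from a property setter" (or implement INotifyPropertyChanged if you're data binding). A minimal sketch, with made-up names rather than anything from an actual model reply:

    using System;

    class Counter
    {
        private int _value;

        // Fires whenever Value is assigned a different value.
        public event Action<int>? ValueChanged;

        public int Value
        {
            get => _value;
            set
            {
                if (_value == value) return;  // ignore no-op assignments
                _value = value;
                ValueChanged?.Invoke(value);  // run subscribers on change
            }
        }
    }

    class Program
    {
        static void Main()
        {
            var c = new Counter();
            c.ValueChanged += v => Console.WriteLine($"Value is now {v}");
            c.Value = 42;  // prints "Value is now 42"
        }
    }

That's the whole idea; no step-by-step walkthrough needed.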
I would personally much rather drink the “human who doesn’t speak fluently” poison.
LLMs are pretty good at fixing documents in exactly the way you want. At the very least, you can ask one to fix typos and grammar errors without changing the tone, structure, or content.
> Because it doesn’t just fix your grammar, it makes you sound suspiciously like spam.
This ship sailed a long time ago. We have been exposed to AI-generated text content for a very long time without even realizing it. If you read a little more specialized web news, assume that at least 60% of the content is AI-translated from the original language. Not to mention, it could have been AI-generated in the source language as well. If you read the web in several languages, this becomes shockingly obvious.
It does however work just fine if you ask it for grammar help or whatever, then apply those edits. And for pretty much the rest of the content too: if you have the AI generate feedback, ideas, edits, etc., and then apply them yourself to the text, the result avoids these pitfalls and the author is doing the work that the reader expects and deserves.
It's a tool and it depends on how you use it. If you tell it to fix your grammar with minimal intervention to the actual structure it will do just that.
Usually
I disagree. You can use it to point out grammar mistakes and then fix them yourself without changing the meaning or tone of the subject.
Paste passages from Wikipedia featured articles, today's newspapers or published novels and it'll still suggest style changes. And if you know enough to know to ignore ChatGPT's suggestions, you didn't need it in the first place.
> And if you know enough to know to ignore ChatGPT's suggestions, you didn't need it in the first place.
This will invalidate even ispell in vim. The entire point of proofreading is to catch things you didn’t notice. Nobody would say “you don’t need the red squiggles underlining strenght because you already know it is spelled strength.”