From the GitHub repo (https://github.com/ExxistanceDC/Segagaga-English-Translation), the translation went through a process called MTPE (machine translation post-editing). This works just like it sounds: the initial translation is done with machine translation, then human translators review and edit the resulting output to correct any mistakes.
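The two-stage workflow described above can be sketched as a simple pipeline. This is a minimal illustration, not anything from the repo: `machine_translate` is a stand-in for whatever MT backend is used (DeepL, an LLM API, etc.), and the human post-editor is modeled as a function passed in.

```python
def machine_translate(segment: str) -> str:
    # Stand-in for a real MT backend (e.g. DeepL or an LLM API call).
    return f"[MT draft] {segment}"

def mtpe(segments, post_edit):
    # Stage 1: machine-translate every segment.
    drafts = [machine_translate(s) for s in segments]
    # Stage 2: a human reviews and edits each draft.
    return [post_edit(d) for d in drafts]

# Usage: the "human" editor is simulated here by a trivial edit function.
edited = mtpe(["セガガガ"], lambda d: d.replace("[MT draft]", "[edited]"))
```

The point of contention in the thread below is stage 2: whether editing the drafts is actually cheaper than translating the segments from scratch.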

> What I call the “playtesting translation” — a base translation that allowed the artists and playtesters to get started early and understand what they were working on — was developed using a combination of DeepL and ChatGPT 4o/4.5. That translation then went through a substantial, months-long human translator review. I don't think that the end product feels “machine-translated,” but that’s ultimately for you, the player, to judge.

> MTPE (machine translation post-editing). This works just like it sounds: the initial translation is done with machine translation, then human translators review and edit the resulting output to correct any mistakes.

And the consensus among professional translators is that MTPE only saves time if you're willing to accept a half-assed result. Editing MT output up to the standard of a manual translation takes just as much expertise and effort as translating from scratch.

> And the consensus among professional translators is that MTPE only saves time if you're willing to accept a half-assed result.

I have no particular interest in translation, but when the person saying X is bad depends financially on you not buying X, you have to take their word with a grain of salt.


> consensus among professional translators

[citation needed]

https://blog.gts-translation.com/2025/04/07/the-state-of-mac...

> 12.08% [of translators] say MTPE produces high-quality output.

> A significant portion (around 50%) of respondents do not offer discounts for MTPE work, arguing that post-editing can take as much time as traditional translation.

> Among those who do offer discounts, the most common range is between 10-30%.

Oh boo hoo.

Just boooooo.


As a sidebar, I found it funny how in a Stanford lecture teaching various routes to training LLMs on the machine translation benchmark, the sample used was a French-to-English translation of "the teddy bear is blue" or something similar.

After the lecture I tried the current production-grade Google Translate... it butchered the translation.

I wouldn't trust machine translation.