Okay, what's stopping you from feeding the code into an LLM and having it rewrite it so it's yours? You can even add extra steps: make it analyze the code block by block, then supervise it as it rewrites. Bam. AI-age IP freedom.

Morals may stop you but other than that? IMHO all open source code is public domain code if anyone is willing to spend some AI tokens.

That would be a derivative work, and still be subject to the license terms and conditions, at best.

There are standard ways to approach this, known as clean-room engineering.

https://en.m.wikipedia.org/wiki/Clean-room_design

One person reads the code and produces a detailed technical specification. Someone reviews it to ensure that there is nothing in there that could be classified as copyrighted material, then a third person (who has never seen the original code) implements the spec.

You could use an LLM at both stages, but you'd have to be able to prove that the LLM doing the implementation had no prior knowledge of the code in question... which, given how LLMs have been trained, seems to me to be very dubious territory until that legal situation gets resolved.

AI is useful for Chinese-walling code, but it's not as easy as you make it sound. To stay out of legal trouble, you probably should refactor the code into a different language, then back into the target language. In the end it turns into a process of being forced to understand the codebase while supervising its rewriting. I've translated libraries into another language using LLMs, and I'd say that process was about half the labor of writing it myself. So going both ways, you may as well rewrite the code yourself... but working with the LLM will make you familiar enough with the subject matter that you -could- rewrite it, so I guess you could think of it as a sort of buggy tutorial process?

I am not sure even that is enough. You would really need to do a clean room reimplementation to be safe - for exactly the same reasons that people writing code write clean room reimplementations.

Yeah, the algorithms and program flow would have to be materially distinct to be really safe. Maybe switching language paradigms would get that for you in most cases? Js->haskell->js? Sounds like a nightmare lol.
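As a toy illustration of how a paradigm round trip changes program structure (sketched in Python rather than JS/Haskell, and no claim about what an LLM would actually emit): the imperative loop-and-mutate version and the fold-style version you'd tend to get back from a functional detour compute the same result with materially different algorithmic shape.

```python
from functools import reduce

# Imperative original: the kind of loop-and-mutate code you might start with.
def word_counts_imperative(text):
    counts = {}
    for word in text.split():
        if word in counts:
            counts[word] += 1
        else:
            counts[word] = 1
    return counts

# What the same logic tends to look like after a round trip through a
# functional language: no mutation, a single fold over the words.
def word_counts_functional(text):
    return reduce(
        lambda acc, word: {**acc, word: acc.get(word, 0) + 1},
        text.split(),
        {},
    )

print(word_counts_imperative("a b a"))  # {'a': 2, 'b': 1}
print(word_counts_functional("a b a") == word_counts_imperative("a b a"))  # True
```

Whether that structural difference is "materially distinct" enough for a court is exactly the open legal question upthread.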

Tell me you haven't used LLMs on large, non-trivial codebases without telling me... :)

Tell me you don't know how to use LLMs properly without telling me.

You don't give the whole codebase to an LLM and expect one-shot output. Instead, you break it down and write the code block by block. Then the size of the codebase doesn't matter. You use the LLM as a tool; it is not supposed to replace you. You don't try to become George Jetson, just pressing a button and never touching anything; instead you stay on top of it as the LLM does the coding. You test the code at every step to see if the implementation behaves as expected. Do enough of this and you have proper, full "bespoke" software.
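A minimal sketch of that supervision loop, where `rewrite_block` is a hypothetical stand-in for the actual LLM call (here it returns the block unchanged so the example is runnable), and each rewritten block is tested before moving on:

```python
def rewrite_block(block: str) -> str:
    # Hypothetical: in practice you would send `block` plus context to the
    # model and get a rewritten version back.
    return block

def rewrite_codebase(blocks, tests):
    """Rewrite one block at a time, running its behavior test before moving on."""
    rewritten = []
    for block, test in zip(blocks, tests):
        candidate = rewrite_block(block)
        namespace = {}
        exec(candidate, namespace)  # load the rewritten block in isolation
        assert test(namespace), f"behavior changed in block: {block[:40]!r}"
        rewritten.append(candidate)
    return "\n\n".join(rewritten)

# One block and its expected-behavior check.
blocks = ["def add(a, b):\n    return a + b"]
tests = [lambda ns: ns["add"](2, 3) == 5]
print(rewrite_codebase(blocks, tests))
```

The point of the structure is the per-block test gate: you never accept a rewrite whose behavior you haven't verified.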

I'll help you along - this is the core function that Kitten ends up calling. Good luck!

https://github.com/espeak-ng/espeak-ng/blob/a4ca101c99de3534...