> We'll need to figure out the techniques and strategies that let us merge AI code sight unseen.

Why do you assume that's doable? I'm not saying it's not, but it seems strange to just take for granted that it is.

Why do you assume I assume it's doable? :P

For real, I'm not certain we will ever be able to merge AI code without human review. But:

1. Every time I've confidently thought "AI will never be able to do X" in the last year, I've later been proven wrong, so I'm a bit wary of assuming that again without strong reasons.

2. I see blog posts by some of the most AI-forward people that seem to imply some people are already managing large codebases without human review of raw code. Maybe they're full of crap - there are certainly plenty of over-credulous bs artists in the AI space - but maybe they're not.

3. The returns on figuring this out are so incredibly high that, if it's possible, people will figure it out.

All that to say: it's far from certain, but my bias is that it is possible.

> Why do you assume I assume it's doable? :P

Because you say we need to figure out techniques to do it. If it's not possible, then there are no techniques to do it. Since you want the techniques, I assume you assume that they exist.

> 1. Every time I've confidently thought "AI will never be able to do X" in the last year, I've later been proven wrong, so I'm a bit wary of assuming that again without strong reasons.

That's evidence that you shouldn't assume something is impossible. I'm not suggesting that, either.

> 2. I see blog posts by some of the most AI-forward people that seem to imply some people are already managing large codebases without human review of raw code. Maybe they're full of crap - there are certainly plenty of over-credulous bs artists in the AI space - but maybe they're not.

Do you have any idea whether this works well, though?

> 3. The returns on figuring this out are so incredibly high that, if it's possible, people will figure it out.

Ok. But again, that's a big "if."

The returns on breaking a popular cryptographic algorithm are also huge, but that's not an indication that it's possible, or that it's impossible for that matter.

I'm baffled why people think that "it would be great if..." has any bearing on the chances that the thing that follows is true.

1. Every time I've confidently stated "this AI architecture will never be able to do X" in the past 6 years, I've not been proven wrong (with one possible exception earlier today: https://news.ycombinator.com/item?id=47291893 – the jury's still out on that one). … No, my version doesn't really work, does it? It just sounds like bragging, or maybe hubris.

> some people are already managing large codebases without human review of raw code.

2. I have never believed this to be impossible. I do, however, maintain that these codebases are necessarily some combination of useless, plagiarized, and bloated. I have yet to see a case where there isn't a smaller, cheaper way to accomplish the same task faster and better.

> The returns on figuring this out are so incredibly high

3. And yet, they still haven't figured it out. My bias is that it isn't possible, because nothing has fundamentally changed about the model architectures since I first skimmed a PDF about GPT, and imagined an informal limiting proof that I still haven't found any holes in.