I'm not sure what you're disagreeing with. Context window size limits are not artificial; it takes real time/money/resources to increase them.

There are a few ways to approach the problem. I've already mentioned pre-training on longer context lengths, and fine-tuning techniques like LongRoPE (rough sketch below).
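
To make the fine-tuning angle concrete, here's a minimal sketch of position interpolation, the basic idea that LongRoPE builds on. (LongRoPE itself searches for non-uniform, per-dimension rescale factors; the single uniform `scale` below is a simplification, and the helper names are just mine for illustration.)

```python
import torch

def rope_frequencies(dim: int, base: float = 10000.0) -> torch.Tensor:
    # Standard RoPE inverse frequencies, one per pair of embedding dims.
    return 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

def rope_angles(seq_len: int, dim: int, scale: float = 1.0) -> torch.Tensor:
    # Position interpolation: divide positions by `scale` so a sequence
    # `scale`x longer than the pre-training window maps back into the
    # position range the model actually saw during training.
    positions = torch.arange(seq_len).float() / scale
    return torch.outer(positions, rope_frequencies(dim))

# A model pre-trained at 4k, stretched to 16k (scale=4). The model still
# has to be fine-tuned at the new length; the rescaling just keeps the
# rotary angles inside the distribution it was trained on.
angles = rope_angles(seq_len=16384, dim=128, scale=4.0)
```

Which is the point: even the "cheap" route still needs a fine-tuning run over long sequences, and that costs real compute.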

I didn't mention inference-time context extension tricks because the papers I've seen suggest there are often problems with quality or unfavorable tradeoffs.
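
For what it's worth, a typical example of that inference-time category is "NTK-aware" RoPE scaling: instead of interpolating positions, you raise the rotary base at inference time with no fine-tuning at all. A rough sketch (treat the exact `dim / (dim - 2)` exponent as the commonly cited heuristic, not gospel):

```python
import torch

def ntk_scaled_frequencies(dim: int, scale: float,
                           base: float = 10000.0) -> torch.Tensor:
    # "NTK-aware" scaling: raise the RoPE base so the low-frequency
    # dimensions stretch to cover a window roughly `scale`x longer,
    # purely at inference time. No training involved, which is exactly
    # where the quality problems tend to come from at larger scales.
    adjusted_base = base * scale ** (dim / (dim - 2))
    return 1.0 / (adjusted_base ** (torch.arange(0, dim, 2).float() / dim))
```

Cheap to apply, but this is the kind of trick where the tradeoffs show up as the scale factor grows.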

There’s no magic way around these limits, it’s a real engineering problem.