I really like the idea of aider, but when I tried it, it didn't work. The first real-life file I tried it on was too big and it just blew up. The second real-life file I tried was still too big. I was surprised that aider doesn't seem to have the ability to break a large file down to fit within the token limit; GPT's token limit doesn't amount to a very big source file. If I have to both choose the files to operate on and do surgery on them so GPT doesn't barf, am I saving time vs. using Copilot in my IDE? Going into it, I had thought that coping with the "code size ≫ token limit" problem was aider's main contribution to the solution space, but I seem to have been wrong about that.
I hope to try aider again, but it's in the unfortunate category of "I have to find a problem and a codebase simple enough that aider can handle it", whereas Copilot and ChatGPT come to me where I am. Copilot and ChatGPT help me with my actual job on my real-life codebase, warts and all, every day.
I'm sorry to hear you had a rough experience trying aider. Have you tried it since GPT-4 Turbo came out with the 128k context window? Running `aider --4-turbo` will use that model, which can handle larger individual source files.
Aider helps a lot when your codebase is larger than the GPT context window, but the files that need to be edited do have to fit into the window. This is a fairly common situation, where your whole git repo is quite large but most/all of the individual files are reasonably sized.
Aider summarizes the relevant context of the whole repo [0] and shares it along with the files that need to be edited.
The plan is absolutely to solve the problem you describe and allow GPT to work with individual files that won't fit into the context window. This is less pressing now that 128k of context is available in GPT-4 Turbo, but there are other benefits to not "over-sharing" with GPT. Selective sharing decreases token costs and likely helps GPT focus on the task at hand rather than becoming distracted or confused by a mountain of irrelevant code. Aider already does this sort of contextually aware selective sharing with the "repo map" [0], so the needed work is to extend that concept to sub-file granularity.
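To make "selective sharing" concrete, here's a toy sketch of the general idea (this is not aider's actual implementation, just an illustration): instead of pasting whole files into the prompt, condense each file down to its top-level signatures and share that compact map.

```python
# Toy "repo map" sketch: share symbol signatures instead of file bodies.
# Not aider's real implementation; just an illustration of condensing
# a repo into something that occupies a small slice of the context window.
import ast
from pathlib import Path

def file_signatures(path: Path) -> list[str]:
    """Extract top-level class and function signatures from one Python file."""
    try:
        tree = ast.parse(path.read_text(encoding="utf-8"))
    except (SyntaxError, UnicodeDecodeError):
        return []  # skip files we can't parse
    sigs = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            sigs.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            sigs.append(f"class {node.name}")
    return sigs

def repo_map(root: str) -> str:
    """Build one compact block per file, instead of sending whole files."""
    chunks = []
    for path in sorted(Path(root).rglob("*.py")):
        sigs = file_signatures(path)
        if sigs:
            chunks.append(f"{path}:\n  " + "\n  ".join(sigs))
    return "\n".join(chunks)

# Paste this summary into the prompt rather than the raw repo contents.
print(repo_map("."))
```

Aider's real repo map is much smarter about deciding which symbols are relevant to the current task; the docs at [0] describe how.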
[0] https://aider.chat/docs/repomap.html#using-a-repo-map-to-pro...
Try again: the token limit increased in November by a factor of 16 (128,000 tokens for the GPT-4 Turbo 1106 preview, up from 8,000 for GPT-4).
Mind the cost though! A single request with a fully loaded 128k-token context window to GPT-4 Turbo costs $1.28 in input tokens alone.
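For anyone who wants to check that arithmetic, here's the back-of-envelope math (assuming OpenAI's published GPT-4 Turbo input price of $0.01 per 1K tokens; output tokens are billed separately at a higher rate):

```python
# Cost of filling GPT-4 Turbo's 128k context window with input tokens,
# assuming the published rate of $0.01 per 1K input tokens.
context_tokens = 128_000
usd_per_1k_input = 0.01
print(f"${context_tokens / 1000 * usd_per_1k_input:.2f}")  # -> $1.28
```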