Reducing memory contention in LLM serving will allow LLMs to make effective use of more memory.