Feeding more tokens into an LLM’s context window degrades performance. One study shows that accuracy drops from 95% to
60% ...