ContextBench
A scientific benchmark evaluating the dynamics of multi-file context retrieval in LLM agents.
- Foundation Models: 4
- Best Pass@1: 53.0%
- Avg. Efficiency: 0.599
- Avg. Line F1: 0.325
Benchmark Rankings
| Rank | Model | Pass@1 | Line F1 | Efficiency | Cost |
|---|---|---|---|---|---|
| 1 | Claude Sonnet 4.5 | 53.0% | 0.344 | 0.658 | $0.76 |
| 2 | GPT-5 | 47.2% | 0.312 | 0.591 | $0.45 |
| 3 | Devstral 2 | 40.2% | 0.332 | 0.616 | $0.91 |
| 4 | Gemini 2.5 Pro | 36.4% | 0.311 | 0.529 | $0.38 |
Note: Evaluations in this category use our task-specific adaptations of the mini SWE-agent.
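The headline numbers above follow directly from the leaderboard rows: Best Pass@1 is the column maximum, while Avg. Efficiency and Avg. Line F1 are unweighted means over the four models. A minimal sketch of that arithmetic (the rows are transcribed from the table; nothing else is assumed):

```python
# Leaderboard rows transcribed from the table above:
# (pass@1, line F1, efficiency) per foundation model.
rows = {
    "Claude Sonnet 4.5": (0.530, 0.344, 0.658),
    "GPT-5":             (0.472, 0.312, 0.591),
    "Devstral 2":        (0.402, 0.332, 0.616),
    "Gemini 2.5 Pro":    (0.364, 0.311, 0.529),
}

best_pass_at_1 = max(p for p, _, _ in rows.values())             # 0.530
avg_line_f1    = sum(f for _, f, _ in rows.values()) / len(rows)  # ~0.325
avg_efficiency = sum(e for _, _, e in rows.values()) / len(rows)  # ~0.599

print(f"Best Pass@1: {best_pass_at_1:.1%}")
print(f"Avg. Line F1: {avg_line_f1:.3f}")
print(f"Avg. Efficiency: {avg_efficiency:.3f}")
```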
Key Findings
1. More scaffolding does not mean better context retrieval.
2. Even frontier LLMs struggle to retrieve precise code context.
3. LLMs favor recall over precision, introducing substantial noise.
4. Balanced retrieval achieves higher accuracy at lower cost.
5. Retrieved context is often not used in final solutions (one way to measure this gap is sketched below).
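To illustrate finding 5, here is a minimal sketch of how the gap between explored and utilized context could be quantified from an agent trajectory. The set-overlap definition, the example file names, and the diff-parsing heuristic are illustrative assumptions, not ContextBench's actual implementation.

```python
import re

def utilization_ratio(explored_files: set[str], final_patch: str) -> float:
    """Fraction of files explored during the trajectory that are
    actually modified by the final patch (illustrative definition)."""
    # Files touched by a unified diff appear on lines like "+++ b/path/to/file.py".
    patch_files = set(re.findall(r"^\+\+\+ b/(\S+)", final_patch, flags=re.MULTILINE))
    if not explored_files:
        return 0.0
    return len(explored_files & patch_files) / len(explored_files)

# Hypothetical example: the agent opened three files but edited only one.
explored = {"src/core/session.py", "src/core/auth.py", "tests/test_auth.py"}
patch = """\
--- a/src/core/auth.py
+++ b/src/core/auth.py
@@ -10,7 +10,7 @@
-    token = make_token(user)
+    token = make_token(user, expires=3600)
"""
print(utilization_ratio(explored, patch))  # 1/3 of explored files end up edited
```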
Abstract
LLM-based coding agents have shown strong performance on automated issue resolution benchmarks, yet existing evaluations largely focus on final task success, providing limited insight into how agents retrieve and use code context during problem solving. We introduce ContextBench, a process-oriented evaluation of context retrieval in coding agents. ContextBench consists of 1,136 issue-resolution tasks from 66 repositories across eight programming languages, each augmented with human-annotated gold contexts. We further implement an automated evaluation framework that tracks agent trajectories and measures context recall, precision, and efficiency throughout issue resolution. Using ContextBench, we evaluate four frontier LLMs and five coding agents. Our results show that sophisticated agent scaffolding yields only marginal gains in context retrieval ("The Bitter Lesson" of coding agents), LLMs consistently favor recall over precision, and substantial gaps exist between explored and utilized context. ContextBench augments existing end-to-end benchmarks with intermediate gold-context metrics that unbox the issue-resolution process. These contexts offer valuable intermediate signals for guiding LLM reasoning in software tasks.
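The context metrics named in the abstract can be grounded with a small example. Below is a hedged sketch of line-level precision, recall, and F1 between the lines an agent retrieved and the annotated gold context; the per-file line-set matching scheme is an assumption for illustration, not the benchmark's reference implementation.

```python
def line_prf(retrieved: dict[str, set[int]], gold: dict[str, set[int]]):
    """Line-level precision / recall / F1 over (file, line) pairs.

    `retrieved` and `gold` map file paths to sets of line numbers.
    This matching scheme is an illustrative assumption.
    """
    retrieved_pairs = {(f, ln) for f, lines in retrieved.items() for ln in lines}
    gold_pairs = {(f, ln) for f, lines in gold.items() for ln in lines}
    hits = len(retrieved_pairs & gold_pairs)
    precision = hits / len(retrieved_pairs) if retrieved_pairs else 0.0
    recall = hits / len(gold_pairs) if gold_pairs else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical example: the agent over-retrieves (high recall, low precision).
retrieved = {"pkg/parser.py": set(range(1, 201))}   # 200 lines read
gold = {"pkg/parser.py": set(range(40, 60))}        # 20 gold lines, all retrieved
print(line_prf(retrieved, gold))  # precision 0.10, recall 1.00, F1 ≈ 0.18
```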
Construction Pipeline
An overview of the ContextBench construction pipeline. ContextBench is curated through three key steps: Task Deduplication, Task Selection, and Expert Annotation.

1. Task Deduplication
Removes exact and near-duplicate tasks from multiple issue-resolution benchmarks using rule-based and embedding-based detection (see the sketch after this list).
2. Task Selection
Identifies challenging tasks based on agent solvability and the scope and dispersion of edits in ground-truth patches.
3. Expert Annotation
Employs expert developers to trace code dependencies to construct gold contexts, validated through LLM-based patch generation.
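As an illustration of the deduplication step, the sketch below flags near-duplicate tasks by cosine similarity between issue-text embeddings. The embedding model, the 0.92 threshold, and the pairwise loop are assumptions chosen for the example; ContextBench's actual rules and thresholds are not specified here.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding backend

def near_duplicate_pairs(issue_texts: list[str], threshold: float = 0.92):
    """Return index pairs of tasks whose issue texts look near-identical.

    Model choice and threshold are illustrative assumptions.
    """
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(issue_texts, normalize_embeddings=True)  # unit-norm rows
    sims = np.asarray(emb) @ np.asarray(emb).T                  # cosine similarity
    n = len(issue_texts)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if sims[i, j] >= threshold]
```

Exact duplicates can be caught even earlier with rule-based checks (e.g. hashing normalized issue text), leaving the embedding pass for paraphrased or lightly edited reposts.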
Dataset Statistics
A repository-level benchmark spanning eight programming languages, with human-verified gold contexts that expose the intermediate context-retrieval signals missing from resolution-rate-only evaluation.
| Language | #Repo | #Task | #File | #Block | #Line |
|---|---|---|---|---|---|
| Python | 20 | 512 | 1,520 | 6,714 | 115,122 |
| Java | 6 | 57 | 262 | 3,030 | 49,057 |
| JavaScript | 9 | 153 | 819 | 3,949 | 87,907 |
| TypeScript | 8 | 119 | 537 | 1,106 | 40,621 |
| Go | 7 | 104 | 679 | 3,000 | 71,596 |
| Rust | 9 | 63 | 272 | 1,842 | 50,402 |
| C | 3 | 68 | 250 | 1,591 | 62,300 |
| C++ | 4 | 60 | 209 | 1,884 | 45,110 |
| Total | 66 | 1,136 | 4,548 | 23,116 | 522,115 |
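To make the file / block / line counts above concrete, here is a hedged sketch of a gold-context record at those three granularities. The field names and layout are hypothetical; the released dataset's actual schema may differ.

```python
from dataclasses import dataclass, field

@dataclass
class GoldBlock:
    """A contiguous annotated span inside one file (counted under #Block)."""
    start_line: int
    end_line: int  # inclusive; every spanned line counts toward #Line

@dataclass
class GoldFile:
    """One file referenced by the gold context (counted under #File)."""
    path: str
    blocks: list[GoldBlock] = field(default_factory=list)

@dataclass
class GoldContext:
    """Human-annotated gold context for a single task (counted under #Task)."""
    task_id: str
    repo: str
    files: list[GoldFile] = field(default_factory=list)

    def num_lines(self) -> int:
        return sum(b.end_line - b.start_line + 1 for f in self.files for b in f.blocks)
```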