QuantCode-Bench: A Benchmark for Evaluating the Ability of Large Language Models to Generate Verifiable Code

Academic frontier · Score 5.5 · Source: cs.CL updates on arXiv.org · Published 2026-04-17

Rating rationale: a code-generation benchmark focused on verifiability, timely as quality assurance for code agents becomes increasingly important.

arXiv:2604.15151v1 Announce Type: new

Abstract: Large language models have demonstrated strong performance on general-purpose programming tasks, yet their ability to generate executable algorithmic trading strategies remains underexplored. Unlike standard code benchmarks, trading-strategy generation requires simultaneous mastery of domain-specific financial logic, knowledge of a specialized API, and the ability to produce code that is not only syntactically correct but also leads to actual trades on historical data.
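To illustrate the distinction the abstract draws, the sketch below shows the kind of behavioral check such a benchmark implies: a generated strategy passes only if replaying it over historical prices produces at least one actual trade, not merely if it parses. Every name here (`generated_strategy`, `backtest`, the moving-average rule) is an illustrative assumption, not the benchmark's actual API.

```python
def moving_average(prices, window):
    """Trailing moving average; None until enough history is available."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def generated_strategy(prices):
    """A toy stand-in for an LLM-generated rule (hypothetical): buy when
    the latest price is above its 3-period moving average, else sell."""
    ma = moving_average(prices, 3)
    if ma is None:
        return "hold"
    return "buy" if prices[-1] > ma else "sell"

def backtest(strategy, history):
    """Replay historical prices step by step, recording every executed
    trade. A strategy 'verifies' only if this list is non-empty."""
    trades, position = [], None
    for t in range(1, len(history) + 1):
        signal = strategy(history[:t])
        if signal == "buy" and position != "long":
            position = "long"
            trades.append(("buy", t - 1, history[t - 1]))
        elif signal == "sell" and position == "long":
            position = None
            trades.append(("sell", t - 1, history[t - 1]))
    return trades

prices = [10, 9, 8, 9, 11, 12, 10, 8, 9, 13]
trades = backtest(generated_strategy, prices)
# The check the abstract describes: syntactic validity is not enough;
# the code must lead to actual trades on the historical series.
assert trades, "strategy never traded -> fails verification"
print(len(trades), "trades")  # → 3 trades
```

The key design choice is that the pass criterion is behavioral (trades were placed) rather than syntactic, which is what separates this setting from standard code-generation benchmarks.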