HumanEval
A benchmark of 164 hand-written programming problems used to evaluate a model's ability to generate code.
Model | Score | Organization | Release Date | Parameters (B) |
---|---|---|---|---|
OpenAI o3-mini (high) | 97.6 | OpenAI | 2025-01-31 | Unknown |
Claude 3.5 Sonnet New | 93.7 | Anthropic | 2024-10-22 | Unknown |
OpenAI o1-mini | 92.4 | OpenAI | 2024-09-12 | Unknown |
Claude 3.5 Sonnet | 92.0 | Anthropic | 2024-06-21 | Unknown |
Hunyuan-TurboS | 91.0 | Tencent | 2025-03-10 | Unknown |
GPT-4o (2024-11-20) | 90.2 | OpenAI | 2024-11-20 | Unknown |
GPT-4o | 90.0 | OpenAI | 2024-05-13 | Unknown |
Amazon Nova Pro | 89.0 | Amazon | 2024-12-03 | Unknown |
Gemini 1.5 Pro | 89.0 | Google | 2024-02-15 | Unknown |
DeepSeek-V3 | 89.0 | DeepSeek | 2024-12-26 | 681 |
Llama3.1-405B Instruct | 89.0 | Meta | 2024-07-23 | 405 |
Mistral-Small-3.1-24B-Instruct-2503 | 88.41 | Mistral AI | 2025-03-17 | 24 |
Qwen2.5-32B | 88.4 | Alibaba | 2024-09-18 | 32 |
Llama3.3-70B-Instruct | 88.4 | Meta | 2024-12-06 | 70 |
Grok 2 | 88.4 | xAI | 2024-08-13 | Unknown |
Claude 3.5 Haiku | 88.1 | Anthropic | 2024-10-22 | Unknown |
Gemma 3 - 27B (IT) | 87.8 | Google | 2025-03-12 | 27 |
GPT-4o mini | 87.2 | OpenAI | 2024-07-18 | Unknown |
Claude3-Opus | 84.9 | Anthropic | 2024-03-04 | Unknown |
Llama3.1-70B-Instruct | 80.5 | Meta | 2024-07-23 | 70 |
Phi-4-mini-instruct (3.8B) | 74.4 | Microsoft | 2025-02-27 | 3.8 |
Grok-1.5 | 74.1 | xAI | 2024-03-29 | Unknown |
Qwen2.5-Max | 73.2 | Alibaba | 2025-01-28 | Unknown |
Llama3.1-8B-Instruct | 66.5 | Meta | 2024-07-23 | 8 |
C4AI Aya Vision 32B | 62.2 | Cohere | 2025-03-04 | 32 |
Qwen2.5-72B | 59.1 | Alibaba | 2024-09-18 | 72.7 |
Qwen2.5-7B | 57.9 | Alibaba | 2024-09-18 | 7 |
Moonlight-16B-A3B-Instruct | 48.1 | Moonshot AI | 2025-02-23 | 16 |
Qwen2.5-3B | 42.1 | Alibaba | 2024-09-18 | 3 |
Gemma 2 - 9B | 37.8 | Google | 2024-06-27 | 9 |
Llama3.1-8B | 33.5 | Meta | 2024-07-23 | 8 |
Mistral-7B-Instruct-v0.3 | 29.3 | Mistral AI | 2024-05-22 | 7 |
Llama-3.2-3B | 28.0 | Meta | 2024-09-18 | 3.2 |
QwQ-32B | 19.0 | Alibaba | 2025-03-06 | 32.5 |
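
HumanEval scores are usually reported as pass@k (most often pass@1): the model samples candidate solutions for each of the 164 problems, and a sample counts as correct if it passes that problem's unit tests. The table above does not state which k or how many samples were drawn, so the sketch below is only illustrative: it shows the unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021), where n is the number of samples generated per problem and c the number that pass. The function name and the 200-sample example are assumptions for illustration, not values taken from this leaderboard.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total samples generated for one problem
    c: number of those samples that pass the unit tests
    k: the k in pass@k
    """
    if n - c < k:
        # Fewer failing samples than k, so any k-subset contains a pass.
        return 1.0
    # Compute 1 - C(n-c, k) / C(n, k) as a numerically stable running product.
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Hypothetical example: 200 samples for a problem, 37 of them pass.
print(pass_at_k(n=200, c=37, k=1))  # ~0.185, i.e. c/n for k=1
```

For k=1 the estimator reduces to c/n; generating more samples per problem (n > k) does not change the expected value but lowers the variance of the estimate, which is why published pass@1 numbers are often computed from many samples per problem.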