WizardCoder-15B-V1.0
- Commercial use: Not supported
- Context length: 2K tokens
- Model type: Code LLM
- Release date: 2023-06-14
- Model size: 31GB
A code LLM open-sourced by Microsoft, fine-tuned from StarCoder: https://www.datalearner.com/ai-models/pretrained-models/StarCoder
It scores 57.3 on HumanEval Pass@1, the best result among open-source models at the time of release and close to GPT-3.5. The table below compares reported scores.
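The HumanEval and MBPP columns report pass@1, i.e. the chance that a single generated sample passes the unit tests. For reference, this is usually computed with the unbiased pass@k estimator from the Codex/HumanEval evaluation setup; a minimal sketch (function name is illustrative):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k samples,
    drawn from n generations of which c pass the unit tests, is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 samples per problem, 115 passing -> pass@1 = c/n = 0.575 (57.5%)
print(pass_at_k(200, 115, 1))
```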
| Model | HumanEval Pass@1 (%) | MBPP Pass@1 (%) |
|---|---|---|
| CodeGen-16B-Multi | 18.3 | 20.9 |
| CodeGeeX | 22.9 | 24.4 |
| LLaMA-33B | 21.7 | 30.2 |
| LLaMA-65B | 23.7 | 37.7 |
| PaLM-540B | 26.2 | 36.8 |
| PaLM-Coder-540B | 36.0 | 47.0 |
| PaLM 2-S | 37.6 | 50.0 |
| CodeGen-16B-Mono | 29.3 | 35.3 |
| Code-Cushman-001 | 33.5 | 45.9 |
| StarCoder-15B | 33.6 | 43.6* |
| InstructCodeT5+ | 35.0 | -- |
| WizardLM-30B 1.0 | 37.8 | -- |
| WizardCoder-15B 1.0 | 57.3 | 51.8 |
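To try the model locally, the sketch below uses the Hugging Face transformers API. The repository id `WizardLM/WizardCoder-15B-V1.0` and the Alpaca-style instruction prompt are assumptions based on the usual WizardLM release conventions; adjust them if your checkpoint differs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardCoder-15B-V1.0"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # ~31GB checkpoint; fp16 needs a large GPU
    device_map="auto",
)

# Alpaca-style prompt assumed for instruction-tuned WizardCoder models
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Print only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```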