

Baichuan2-7B-Base

Base foundation model


Release date: 2023-09-06 · Updated: 2023-09-09
Parameters
7B
Context length
4K
Chinese support
Supported
Reasoning ability
No data
Baichuan2-7B-Base is an AI model published by Baichuan Intelligence (百川智能), released on 2023-09-06 as a base foundation model, with 7.0B parameters and a 4K-token context length, requiring about 15GB of storage, under the Baichuan 2 Model Community License Agreement.

Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then from benchmark leaderboards, then from third-party evaluators. Learn about our data methodology.


Model basics

Reasoning traces
Not supported
Thinking modes
Not supported
Context length
4K tokens
Max output length
No data
Model type
Base foundation model
Release date
2023-09-06
Model file size
15GB
MoE architecture
No
Total params / Active params
7B / N/A
Knowledge cutoff
No data
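The 4K context length above has a direct inference-memory cost via the KV cache. A back-of-envelope estimate, assuming the commonly reported Baichuan2-7B dimensions of 32 layers and hidden size 4096 (these architecture figures are assumptions, not stated on this card):

```python
# Rough fp16 KV-cache size at full context for a decoder-only transformer.
# Assumed Baichuan2-7B dimensions (not from this card): 32 layers, hidden 4096.

def kv_cache_gib(n_layers: int, hidden: int, seq_len: int,
                 bytes_per_elem: int = 2) -> float:
    """Keys + values across all layers for one sequence, in GiB."""
    # 2 tensors (K and V) per layer, each of shape [seq_len, hidden]
    return 2 * n_layers * seq_len * hidden * bytes_per_elem / 1024**3

print(kv_cache_gib(n_layers=32, hidden=4096, seq_len=4096))  # → 2.0
```

This is on top of the memory for the weights themselves, which is why total inference memory exceeds the raw model-file size.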

Open source & experience

Code license
Apache 2.0
Weights license
Baichuan 2 Model Community License Agreement (free commercial use with authorization)
GitHub repo
https://github.com/baichuan-inc/Baichuan2
Hugging Face
https://huggingface.co/baichuan-inc/Baichuan2-7B-Base
Live demo
No live demo

Official resources

Paper
Baichuan 2: Open Large-scale Language Models
DataLearnerAI blog
No blog post yet

API details

API speed
No data
No public API pricing yet.

Benchmark Results

No benchmark data to show.

Publisher

Baichuan Intelligence (百川智能)
View publisher details

Model Overview

Baichuan2-7B-Base is the second generation of Baichuan Intelligence's open-source Baichuan model series. Compared with the first generation, Baichuan2-7B-Base improves considerably across the board. The Baichuan2 series comes in three flavors: base models, fine-tuned (aligned) models, and quantized models. Baichuan2-7B-Base is the base model, with 7 billion parameters.


The second-generation models were trained on 2.6 trillion tokens of high-quality corpus, more data than the first generation used.


Running inference with Baichuan2-7B-Base requires about 15.3GB of GPU memory.


The GPU memory required for Baichuan2-7B inference (including quantized versions) is as follows:

Quantization precision    Baichuan2-7B (GB)
bf16 / fp16               15.3
8-bit                     8.0
4-bit                     5.1
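These figures can be sanity-checked with a weights-only estimate: parameter count times bytes per parameter. The results land somewhat below the measured values, since real inference also needs memory for activations and the KV cache (this is an illustrative sketch, not the methodology behind the table):

```python
# Weights-only VRAM estimate for a 7B-parameter model at several precisions.
# Actual usage (the figures in the table) is higher due to activations,
# the KV cache, and framework overhead.

BYTES_PER_PARAM = {
    "bf16/fp16": 2.0,
    "8-bit": 1.0,
    "4-bit": 0.5,
}

def weights_vram_gb(n_params: float, precision: str) -> float:
    """Approximate GiB needed just to hold the weights."""
    return n_params * BYTES_PER_PARAM[precision] / 1024**3

for prec in BYTES_PER_PARAM:
    print(f"{prec}: {weights_vram_gb(7e9, prec):.1f} GiB")
# bf16/fp16 → 13.0, 8-bit → 6.5, 4-bit → 3.3
```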


The second-generation Baichuan models also come in a 13B version, which needs more GPU memory but performs better. For the inference memory requirements of the Baichuan2-13B series, see the DataLearner model card for Baichuan2-13B-Chat.



Compared with the first-generation Baichuan 7B, the second-generation model improves considerably in text understanding, reasoning, and math. It is also free for commercial use, although a license grant must first be obtained.


For Baichuan2-7B-Base's results on MMLU, C-Eval, AGIEval, and GSM8K, see the DataLearner LLM evaluation leaderboard: https://www.datalearner.com/ai-models/llm-evaluation


For a detailed introduction to the Baichuan2 model series, including training details and datasets, see the official DataLearner write-up: https://www.datalearner.com/blog/1051694226173083
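For readers who want to try the base model, here is a minimal loading sketch using the Hugging Face `transformers` library. The repository ID comes from the Hugging Face link above; `trust_remote_code=True` is needed because the repo ships custom modeling code. The precision options mirror the memory table above, but the exact `from_pretrained` flags are assumptions based on common `transformers`/`bitsandbytes` usage, not an official snippet:

```python
# Sketch: build from_pretrained() kwargs for Baichuan2-7B-Base at several
# precisions. Flag names follow common transformers/bitsandbytes usage
# (assumed, not vendor-official).

MODEL_ID = "baichuan-inc/Baichuan2-7B-Base"  # from the Hugging Face link above

def from_pretrained_kwargs(precision: str = "bf16") -> dict:
    """Keyword arguments for AutoModelForCausalLM.from_pretrained()."""
    kwargs = {"trust_remote_code": True, "device_map": "auto"}
    if precision == "bf16":
        kwargs["torch_dtype"] = "bfloat16"   # ~15.3 GB VRAM (see table above)
    elif precision == "fp16":
        kwargs["torch_dtype"] = "float16"
    elif precision == "8bit":
        kwargs["load_in_8bit"] = True        # ~8 GB, requires bitsandbytes
    elif precision == "4bit":
        kwargs["load_in_4bit"] = True        # ~5 GB, requires bitsandbytes
    else:
        raise ValueError(f"unknown precision: {precision}")
    return kwargs

RUN_DEMO = False  # flip on a machine with transformers installed and enough VRAM
if RUN_DEMO:
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, **from_pretrained_kwargs("bf16"))
    ids = tok("The capital of France is", return_tensors="pt").to(model.device)
    out = model.generate(**ids, max_new_tokens=16)
    print(tok.decode(out[0], skip_special_tokens=True))
```

Note that this is the base model, not a chat-aligned variant, so it is best prompted for plain text continuation rather than instruction following.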
