DataLearner AI
© 2026 DataLearner AI. DataLearner curates industry data and case studies so researchers, enterprises, and developers can rely on trustworthy intelligence.

DeepSeek LLM 7B Chat


Release date: 2023-11-29 · Updated: 2024-01-11 13:17:52
Parameters
7.0B
Context length
4K
Chinese support
Supported
Reasoning ability
Not supported

Data sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators. Learn about our data methodology


Model basics

Reasoning traces
Not supported
Context length
4K tokens
Max output length
No data
Model type
Chat LLM
Release date
2023-11-29
Model file size
13.82GB
MoE architecture
No
Total params / Active params
7.0B / N/A
Knowledge cutoff
No data
Inference modes
No mode data
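The listed file size can be sanity-checked against the parameter count. A minimal sketch, assuming the weights are stored in 16-bit precision (bf16/fp16, 2 bytes per parameter) with no container overhead — an assumption, since the page does not state the storage format:

```python
# Estimate the weight-file size of a ~7B-parameter model.
# Assumption (not stated on the page): 16-bit weights (bf16/fp16),
# i.e. 2 bytes per parameter, and decimal gigabytes (1 GB = 1e9 bytes).
params = 7.0e9           # total parameters
bytes_per_param = 2      # bf16/fp16
size_gb = params * bytes_per_param / 1e9
print(f"{size_gb:.2f} GB")  # → 14.00 GB
```

The estimate (~14 GB) lands close to the listed 13.82 GB; the small gap is consistent with the true parameter count being slightly under 7.0B.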

Open source & experience

Code license
MIT License
Weights license
DEEPSEEK LICENSE AGREEMENT (free for commercial use)
GitHub repo
https://github.com/deepseek-ai/DeepSeek-LLM
Hugging Face
https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat
Live demo
No live demo

Official resources

Paper
DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
DataLearnerAI blog
No blog post yet

API details

API speed
No data
No public API pricing yet.

Benchmark Results

No benchmark data to show.

Publisher

DeepSeek-AI
View publisher details

Model Overview

DeepSeek LLM 7B Chat is an open-source large language model from DeepSeek-AI, produced by chat-oriented alignment tuning of the DeepSeek LLM 7B Base model. DeepSeek-AI is an AI large-model company under High-Flyer Quant (幻方量化), a well-known Chinese quantitative fund.


DeepSeek LLM 7B Chat is their open-source 7-billion-parameter language model. Its overall performance is comparable to LLaMA2-7B, but it scores clearly better on Chinese-language evaluations. The DeepSeek LLM family comprises four models across two parameter scales, 7B and 67B, each released as a pretrained Base model and a chat-aligned Chat version.


DeepSeek LLM 7B Chat refers to the 7-billion-parameter chat-aligned version. Benchmark results for the four DeepSeek LLM models are as follows:

| Model | TriviaQA | MMLU | GSM8K | HumanEval | BBH | C-Eval | CMMLU | ChineseQA |
|---|---|---|---|---|---|---|---|---|
| DeepSeek LLM 7B Base | 59.7 | 48.2 | 17.4 | 26.2 | 39.5 | 45.0 | 47.2 | 78.0 |
| DeepSeek LLM 67B Base | 78.9 | 71.3 | 63.4 | 42.7 | 68.7 | 66.1 | 70.8 | 87.6 |
| DeepSeek LLM 7B Chat | 57.9 | 49.4 | 62.6 | 48.2 | 42.3 | 47.0 | 49.7 | 75.0 |
| DeepSeek LLM 67B Chat | 81.5 | 71.1 | 84.1 | 73.8 | 71.7 | 65.2 | 67.8 | 85.1 |


For the DeepSeek LLM 7B Base version, see: https://www.datalearner.com/ai-models/pretrained-models/deepseek-llm-7b-base
