

Google Gemma 4 31B

Release date: 2026-04-02 · Updated: 2026-04-03 16:33:52
Parameters
31.0B
Context length
256K
Chinese support
Supported
Reasoning ability
Supported
Data sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators. Learn about our data methodology


Model basics

Reasoning traces
Supported
Thinking modes
Thinking Level · On (default) / Thinking Level · Off
Context length
256K tokens
Max output length
32768 tokens
Model type
Reasoning LLM
Release date
2026-04-02
Model file size
62.6 GB
MoE architecture
No
Total params / Active params
31.0B / N/A
Knowledge cutoff
No data
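Two of the figures above interact: the 256K context window is a shared budget for the prompt plus the generated output. A minimal sketch of that arithmetic, using only the token counts listed in the table above:

```python
# Token-budget arithmetic for the listed limits: the context window is
# shared between the prompt and the generated output.
CONTEXT_LENGTH = 256 * 1024  # 256K-token context window
MAX_OUTPUT = 32_768          # maximum output length

def max_prompt_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Largest prompt that still leaves `reserved_output` tokens for generation."""
    return CONTEXT_LENGTH - reserved_output

print(max_prompt_tokens())  # 229376
```

So a prompt that reserves the full 32,768-token output budget can be at most 229,376 tokens long.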

Open source & experience

Code license
Apache 2.0
Weights license
Apache 2.0 (free for commercial use)
GitHub repo
https://github.com/google/gemma
Hugging Face
https://huggingface.co/google/gemma-4-31b
Live demo
No live demo

Official resources

Paper
Gemma 4: Byte for byte, the most capable open models
DataLearnerAI blog
No blog post yet

API details

API speed
4/5
No public API pricing yet.

Benchmark Results

Gemma 4 31B's strongest benchmark results are currently MMLU Pro (rank 16 / 115, score 85.20), LiveCodeBench (rank 21 / 108, score 80), and GPQA Diamond (rank 39 / 162, score 84.30). This page also consolidates core specs, context limits, and API pricing so you can evaluate the model on benchmark results and deployment constraints together.


Overall evaluation (1 result)

Benchmark / mode                              | Score | Rank / total
HLE (thinking on · tools · internet enabled)  | 26.50 | 51 / 119

Publisher

Google DeepMind
Model Overview

On April 2, 2026, Google DeepMind unveiled the full Gemma 4 family of open models. The 31B is the flagship of the series and uses a purely dense architecture. It inherits the same advanced underlying research as the current commercial Gemini 3 models, representing the frontier of what the open-source ecosystem offers at this size class.

Architecture and hardware specifications

  • Parameter scale: 31.0 billion total parameters.
  • Context window: full 256K long-context processing.
  • Architecture: combines per-layer embeddings (PLE), global attention, and a shared KV cache, preserving the advantages of uncompressed dense neurons while reducing long-sequence memory overhead.
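To make the long-sequence memory point above concrete, here is a rough KV-cache sizing sketch for a full 256K-token sequence. The layer count, KV-head count, and head dimension are hypothetical placeholders (Google has not published these internals for Gemma 4); swap in the real values once the model config is released.

```python
# Rough per-sequence KV-cache sizing. ALL hyperparameters below are
# ASSUMPTIONS for illustration, not published Gemma 4 values.
N_LAYERS = 48        # assumed layer count
N_KV_HEADS = 8       # assumed shared-KV head count
HEAD_DIM = 128       # assumed head dimension
BYTES_PER_VALUE = 2  # bf16

def kv_cache_bytes(seq_len: int) -> int:
    """KV-cache size for one sequence: K and V tensors (factor 2) per layer."""
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_PER_VALUE * seq_len

gib = kv_cache_bytes(256 * 1024) / 2**30
print(f"{gib:.1f} GiB")  # 48.0 GiB under these assumed hyperparameters
```

Even under these modest assumptions, a single full-length sequence costs tens of GiB of cache, which is why KV sharing matters at 256K context.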

Multimodal processing and core capabilities

The 31B focuses on delivering out-of-the-box, high-fidelity knowledge and deep reasoning:

  • Native multimodality: accurately reads and parses high-resolution video frames, multi-dimensional charts, and structurally complex cross-modal documents, and includes a native speech-recognition system.
  • Strong logical reasoning: the model ships with a built-in Thinking Mode, showing impressive self-correction and long-horizon planning in mathematical derivation, scientific-literature review, and system-level code-architecture generation.

Deployment and recommended scenarios

  • Suitable scenarios: enterprise systems with demanding safety and accuracy requirements, including data cleaning and high-precision question answering for large multinationals with strong digital-sovereignty needs, research institutions, and financial-services firms.
  • Known limitations: because all 31 billion parameters are densely activated, deployment consumes significantly more compute than the MoE variants in the same series; it is not recommended without a dedicated large-VRAM GPU.
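The listed 62.6 GB file size is consistent with the dense parameter count. A back-of-envelope check, assuming the weights are stored in bf16 (2 bytes per parameter):

```python
# Sanity check of the listed model file size: 31.0B dense parameters
# at 2 bytes each (bf16) should come to roughly 62 GB.
PARAMS = 31.0e9      # total parameters, from the spec table
BYTES_PER_PARAM = 2  # bf16/fp16 storage

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"{weights_gb:.1f} GB")  # 62.0 GB, close to the listed 62.6 GB
```

The small remainder is plausibly embeddings, metadata, or non-bf16 tensors; the estimate also shows why inference needs a large-VRAM GPU before any KV cache is even allocated.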
