DataLearner AI

© 2026 DataLearner AI. DataLearner curates industry data and case studies so researchers, enterprises, and developers can rely on trustworthy intelligence.

OpenAssistant-Pythia

Release date: 2023-04-03 · Updated: 2023-04-26
Parameters
12.0B
Context length
2K
Chinese support
Not supported
Reasoning ability
Not supported
Data sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators. Learn about our data methodology

Model basics

Reasoning traces
Not supported
Thinking modes
Not supported
Context length
2K tokens
Max output length
No data
Model type
Base large language model
Release date
2023-04-03
Model file size
23
MoE architecture
No
Total params / Active params
12.0B / N/A
Knowledge cutoff
No data
Open source & experience

Code license
No data
Weights license
No data
GitHub repo
https://github.com/LAION-AI/Open-Assistant
Hugging Face
https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5
Live demo
No live demo
Official resources

Paper
No paper available
DataLearnerAI blog
No blog post yet
API details

API speed
No data
No public API pricing yet.
Benchmark Results

No benchmark data to show.
Publisher

LAION AI
Model Overview

OpenAssistant-Pythia refers to the models in the OpenAssistant series obtained by fine-tuning the Pythia models.

Pythia is a suite of large language models open-sourced by EleutherAI (Pythia model card: https://www.datalearner.com/ai-models/pretrained-models/Pythia ).

The OpenAssistant models fine-tuned from Pythia currently fall into two categories: supervised fine-tuning models, whose names contain "sft", and reward-model fine-tunes, whose names contain "rm".



Model name | Parameters | Description
oasst-sft-1-pythia-12b | 12B | The first-iteration English supervised fine-tuning (SFT) model of the Open-Assistant project. Based on a Pythia 12B model, fine-tuned on roughly 22,000 human demonstrations of assistant conversations collected through the https://open-assistant.io/ human-feedback web app before March 7, 2023.
oasst-sft-4-pythia-12b-epoch-3.5 | 12B | The fourth-iteration English SFT model of the Open-Assistant project. Based on a Pythia 12B model, fine-tuned on human demonstrations of assistant conversations collected through the https://open-assistant.io/ human-feedback web app before March 25, 2023.
oasst-rm-2.1-pythia-1.4b-epoch-2.5 | 1.4B | Fine-tuned from pythia-1.4b-gpt4all-pretrain.
oasst-rm-2-pythia-6.9b-epoch-1 | 6.9B | Fine-tuned from pythia-6.9b-gpt4all-pretrain.
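The SFT checkpoints above are dialogue models and, per their public Hugging Face model cards, expect prompts wrapped in the special tokens <|prompter|>, <|assistant|>, and <|endoftext|>. A minimal sketch of assembling such a prompt; the exact template is taken from the public model card for oasst-sft-4-pythia-12b-epoch-3.5, and the helper name is ours, not OpenAssistant's:

```python
# Sketch: build a prompt in the oasst-sft-* dialogue format.
# The special tokens below follow the public model card for
# oasst-sft-4-pythia-12b-epoch-3.5 (an assumption, not part of this page).

def format_oasst_prompt(turns: list[str]) -> str:
    """Alternate user/assistant turns, ending with an open assistant turn."""
    parts = []
    for i, text in enumerate(turns):
        role = "<|prompter|>" if i % 2 == 0 else "<|assistant|>"
        parts.append(f"{role}{text}<|endoftext|>")
    parts.append("<|assistant|>")  # the model continues generating from here
    return "".join(parts)

prompt = format_oasst_prompt(["What is Pythia?"])
# prompt == "<|prompter|>What is Pythia?<|endoftext|><|assistant|>"
```

The resulting string can then be passed to the tokenizer of any of the oasst-sft checkpoints for generation.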


Foundation model

Pythia
