Replit-finetuned-v1-3b

Release date: 2023-04-26 · Updated: 2023-04-27
Parameters: 2.7B
Context length: 2K
Chinese support: Not supported
Reasoning ability: Not supported

Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators. See our data-collection methodology for details.

Model basics

Reasoning traces: Not supported
Context length: 2K tokens
Max output length: No data
Model type: Base (foundation) LLM
Release date: 2023-04-26
Model file size: No data
MoE architecture: No
Total params / Active params: 2.7B / N/A (see the sketch after this list)
Knowledge cutoff: No data
Inference modes: No data
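
Replit-finetuned-v1-3b itself has no public weights, so the parameter figure cannot be checked directly. As a rough, assumed illustration, the sketch below counts the parameters of the openly released sibling checkpoint replit/replit-code-v1-3b (available on Hugging Face) with standard PyTorch; it is not an official DataLearner or Replit procedure, and the fine-tuned model may differ.

# Minimal sketch (assumption): sanity-check the ~2.7B parameter figure
# against the open sibling checkpoint replit/replit-code-v1-3b, since
# Replit-finetuned-v1-3b has no public weights.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "replit/replit-code-v1-3b",  # open sibling model, used as a stand-in
    trust_remote_code=True,      # the checkpoint ships custom model code
)
total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params / 1e9:.2f}B")  # expected around 2.7B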

Open source & experience

Code license: No data
Weights license: No data
GitHub repo: Link unavailable
Hugging Face: Link unavailable
Live demo: No live demo

Official resources

Paper: No paper available
DataLearnerAI blog: No blog post yet

API details

API speed: No data
API pricing: No public pricing yet

Benchmark results

No benchmark data to show.

Publisher

Replit

Model overview

Replit-finetuned-v1-3b is a large code-generation model developed by Replit and announced alongside Replit-code-v1-3b. Replit has confirmed that Replit-code-v1-3b will be released as an open-source model, but has not said whether Replit-finetuned-v1-3b will be open source.

Replit-code-v1-3b model page: https://www.datalearner.com/ai-models/pretrained-models/replit-code-v1-3b

Judging from the comparison Replit published, this model appears to start from the same base as Replit-code-v1-3b, which was trained on a 525-billion-token code dataset, and to add further fine-tuning on top of it; its results are also better than Replit-code-v1-3b's.

No other details have been released yet.
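
Because the fine-tuned model's weights have not been published, any usage example has to lean on the open sibling. The sketch below, under that assumption, runs a plain causal-LM code completion with Hugging Face Transformers against replit/replit-code-v1-3b and keeps the prompt within the 2K-token context listed above; whether Replit-finetuned-v1-3b would expose the same interface is not confirmed.

# Code-completion sketch using the open sibling replit/replit-code-v1-3b.
# Assumption: Replit-finetuned-v1-3b would offer a similar causal-LM
# interface if its weights were ever released.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "replit/replit-code-v1-3b"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

prompt = "def fibonacci(n):"  # illustrative prompt
# Truncate to the 2K-token context window listed for this model family.
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=2048)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))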

Foundation model

LLaMA
