Llama-4-Maverick-17B-128E-Instruct
Llama-4-Maverick-17B-128E-Instruct is an AI model published by Meta AI, released on 2025-04-05, as a multimodal large language model with 17B active and roughly 400B total parameters, a 1M-token (1000K) context length, requiring about 218GB of storage, under the Llama 4 Community License.
Data is sourced primarily from official releases (GitHub, Hugging Face, papers), then benchmark leaderboards, then third-party evaluators.
Llama 4 Maverick Instruct's current leading benchmark standings are MMLU Pro at rank 47 of 116 (score 80.50), GPQA Diamond at rank 103 of 166 (score 69.80), and LiveCodeBench at rank 86 of 109 (score 43.40). This page also consolidates core specs, context limits, and API pricing, so you can evaluate the model on benchmark results and deployment constraints together.
Llama 4 Maverick is an open-weight large model from Meta AI built on a Mixture-of-Experts (MoE) architecture. The model has 17 billion active parameters and 128 experts, for a total of roughly 400 billion parameters. Its design aims to improve performance on multimodal tasks through an expert-routing mechanism while maintaining high computational efficiency.
Llama 4 Maverick Instruct is the instruction-tuned variant; its full name is Llama-4-Maverick-17B-128E-Instruct, where 128E denotes the 128 experts. It shares the same architecture as the other Llama 4 model, Llama 4 Scout, differing only in the number of experts and therefore in total parameter count, which also yields better results.
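To make the expert-routing idea concrete, here is a minimal toy sketch of top-k routing in an MoE layer: a small router projects each token to one logit per expert, and each token is dispatched to its highest-scoring expert(s). The shapes, the single-expert `top_k`, and the function names are illustrative assumptions, not Llama 4's actual implementation.

```python
import numpy as np

def route_tokens(token_embeddings, router_weights, top_k=1):
    """Toy top-k expert routing: score every expert per token,
    then keep the top_k experts with the highest probability."""
    # Router logits: one score per expert for each token.
    logits = token_embeddings @ router_weights            # (tokens, experts)
    # Softmax over the expert axis gives routing probabilities.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    # Indices of the top_k experts for each token.
    chosen = np.argsort(-probs, axis=-1)[:, :top_k]       # (tokens, top_k)
    return chosen, probs

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))     # 4 tokens, toy hidden dim 8
router = rng.normal(size=(8, 128))   # 128 experts, as in Maverick
experts, probs = route_tokens(tokens, router)
print(experts.shape)                 # one chosen expert per token
```

Because only the chosen experts' feed-forward blocks run for a given token, the per-token compute tracks the 17B active parameters rather than the ~400B total.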
