How we collect and organize LLM and benchmark data
Last updated: 2025-01-20
DataLearnerAI is committed to providing accurate and reliable AI model information. This page explains our data collection process and source priorities.
To ensure data accuracy and authority, we collect data according to the following source priority:
1. Data directly from model publishers
2. Official results from renowned benchmarks
3. Data from reputable independent evaluation organizations
When data from different sources conflict, we apply these strategies:
- Officially published data has the highest authority.
- Key data points include source references for user verification.
- When differences are significant, we may show data from multiple sources.
- We update data promptly as new information becomes available.
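The priority and conflict-resolution rules above can be sketched as a small selection function. This is an illustrative sketch only, not DataLearnerAI's actual implementation; the source tier names and the example scores are hypothetical.

```python
# Hypothetical sketch of the source-priority rule described above.
# Tier names and example values are illustrative, not real data.

SOURCE_PRIORITY = ["official", "benchmark", "independent"]  # highest first


def resolve(values_by_source):
    """Pick the value from the highest-priority source present.

    Also returns any conflicting values from lower-priority sources,
    so they can be displayed alongside the chosen one when the
    differences are significant.
    """
    for source in SOURCE_PRIORITY:
        if source in values_by_source:
            chosen = values_by_source[source]
            conflicts = {s: v for s, v in values_by_source.items() if v != chosen}
            return chosen, conflicts
    return None, {}
```

For example, if an official report and a benchmark leaderboard disagree on a score, the official figure is shown first and the leaderboard figure is kept as a labeled alternative.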
| Data Type | Description | Primary Source |
|---|---|---|
| Model Basic Info | Parameter count, context length, release date, licenses, etc. | Primarily from official GitHub/Hugging Face and papers |
| Benchmark Scores | Evaluation results from various benchmarks | Officially published results preferred, then benchmark leaderboards |
| API Pricing | Model API pricing information | From official pricing pages, updated regularly |
| Performance Metrics | Inference speed, throughput, and other performance data | Official data or evaluators like Artificial Analysis |
If you find any data errors or have authoritative source suggestions, please use the contact options in the footer to reach us.