
A powerful open-weight mixture-of-experts model that activates only a subset of parameters per token for efficient inference. Delivers strong performance across coding, math, and multilingual tasks while remaining cost-effective to run. Community-driven with broad hosting availability.
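The per-token routing that makes mixture-of-experts inference efficient can be illustrated with a minimal sketch. This is a generic top-k gating example, not this model's actual architecture; the function names, gate shape, and `top_k=2` choice are all illustrative assumptions.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, gate_weights, top_k=2):
    """Route one token through only its top-k experts (sparse MoE sketch).

    x: token representation (list of floats)
    experts: list of callables, each standing in for an expert FFN
    gate_weights: hypothetical router matrix, one row of len(x) per expert
    """
    # Router produces one logit per expert for this token.
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in gate_weights]
    probs = softmax(logits)
    # Only the top-k experts are evaluated; the rest are skipped entirely,
    # which is why active parameters per token stay far below the total count.
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in top)
    out = [0.0] * len(x)
    for i in top:
        y = experts[i](x)
        for j in range(len(x)):
            out[j] += (probs[i] / norm) * y[j]
    return out, top
```

With four toy experts, only the two highest-scoring ones run per token; the output is their probability-weighted combination.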
Released: April 17, 2024
Parameters: 176B (MoE)
Context: 64K
Pricing: Open Source
Last updated: March 15, 2026
Benchmark scores may vary based on evaluation methodology and conditions.