OpenAI: gpt-oss-20b
OpenAI • text • function-calling • json-mode
openai/gpt-oss-20b
gpt-oss-20b is an open-weight 21B-parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimized for lower-latency inference and deployability on consumer or single-GPU hardware. The model is trained on OpenAI's Harmony response format and supports configurable reasoning levels, fine-tuning, and agentic capabilities including function calling, tool use, and structured outputs.
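A minimal sketch of calling gpt-oss-20b through an OpenAI-compatible chat completions endpoint with a function-calling tool. The base_url, API key, the get_weather tool, and the system-prompt convention for setting the reasoning level are assumptions for illustration; check your provider's documentation for the exact values and parameters it supports.

```python
# Sketch: function calling against gpt-oss-20b via an OpenAI-compatible API.
# Endpoint, key, tool definition, and reasoning-level prompt are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # hypothetical provider endpoint
    api_key="YOUR_API_KEY",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[
        # Reasoning level is commonly set via the system prompt for gpt-oss;
        # the exact wording may vary by provider.
        {"role": "system", "content": "Reasoning: medium"},
        {"role": "user", "content": "What's the weather in Lisbon right now?"},
    ],
    tools=tools,
    tool_choice="auto",
)

message = response.choices[0].message
if message.tool_calls:
    # The model chose to call the tool; arguments arrive as a JSON string.
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```

The same client can request structured outputs where the provider supports JSON mode, for example by passing response_format={"type": "json_object"} on the create call.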
Best For:
High-volume, low-latency tasks where cost efficiency is paramount
Pricing:
$0.00/1M input tokens, $0.00/1M output tokens
Context Window:
131,072 tokens
Key Differentiator:
Cost-optimized for high-volume usage