Nemotron 3 Super (free) pricing
NVIDIA Nemotron 3 Super is a 120B-parameter open hybrid MoE model, activating just 12B parameters for maximum compute efficiency and accuracy in complex multi-agent applications. Built on a hybrid Mamba-Transformer... This page tracks 1 listing. The lowest listed rates are $0.0000 per million input tokens and $0.0000 per million output tokens (see the table below for the matching seller).
Pricing across providers
All figures are list prices per million tokens unless a column says otherwise. 1 offer is listed for Nemotron 3 Super (free). Lowest input price in this view: Openrouter.
| Provider | Input / 1M | Output / 1M | Cached input | Batch |
|---|---|---|---|---|
| Openrouter | $0.0000 | $0.0000 | — | — |
Cost calculator
Pick any provider row and type how many tokens you expect per day, week, or year. We turn that into rough dollar totals for Nemotron 3 Super (free).
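If you would rather script the math than use the widget, the arithmetic behind it is just tokens ÷ 1,000,000 × list price. A minimal sketch of that calculation; the provider and prices mirror the table above, while the token volumes are made-up example inputs:

```python
# Rough cost estimator for per-million-token list prices.
# Prices mirror the pricing table on this page; the token volumes
# passed in below are arbitrary example inputs, not measurements.

PRICES_PER_1M = {
    # provider: (input $/1M tokens, output $/1M tokens)
    "Openrouter": (0.0000, 0.0000),
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost for a given token volume at list prices."""
    in_price, out_price = PRICES_PER_1M[provider]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

daily = estimate_cost("Openrouter", input_tokens=2_000_000, output_tokens=500_000)
print(f"per day:  ${daily:,.4f}")
print(f"per week: ${daily * 7:,.4f}")
print(f"per year: ${daily * 365:,.4f}")
```

For this free listing every total comes out to $0.00, but the same formula applies to any paid row you swap into PRICES_PER_1M.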
Model specifications
Context length, caps, and capability flags for Nemotron 3 Super (free). Values follow the main provider (Nvidia) record in our index.
- Context window: 262,144 tokens
- Max output: 262,144 tokens
- Vision (images): No
- Tool / function calling: Yes
- Streaming: Yes
- Released: Mar 2026
- Primary provider: Nvidia
- Model family: N/A
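The streaming and tool-calling flags above map onto OpenRouter's OpenAI-compatible chat endpoint. A minimal streaming sketch, assuming the official openai Python client; the model slug nvidia/nemotron-3-super:free and the API key are hypothetical placeholders, so check the Openrouter listing for the real ID:

```python
# Minimal streaming call through OpenRouter's OpenAI-compatible endpoint.
# NOTE: the model slug and API key below are hypothetical placeholders;
# look up the actual model ID on the Openrouter listing before running.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",
)

stream = client.chat.completions.create(
    model="nvidia/nemotron-3-super:free",  # hypothetical slug
    messages=[{"role": "user", "content": "Explain MoE routing in two sentences."}],
    max_tokens=1024,  # well under the listed 262,144-token output cap
    stream=True,      # the spec sheet flags streaming as supported
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```

Tool / function calling, also flagged above, would use the same endpoint with a tools array added to the request.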
Compare Nemotron 3 Super (free)
Jump into a comparison when you want one table for two models instead of two tabs. 6 curated matches for Nemotron 3 Super (free), each comparing pricing side by side.
- Nemotron 3 Super (free) vs GPT-4o
- Nemotron 3 Super (free) vs GPT-4o mini
- Nemotron 3 Super (free) vs Claude Sonnet 4.6
- Nemotron 3 Super (free) vs Gemini 2.0 Flash
- Nemotron 3 Super (free) vs o3
- Nemotron 3 Super (free) vs Llama 3.1 70B
Frequently asked questions
Read these after the table if you want plain language around Nemotron 3 Super (free) rates. The short model note from our index is quoted at the top of this page.
Also from Nvidia
Other models by Nvidia with live pricing in our catalog.