Mixtral 8x7B Instruct pricing
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model by Mistral AI, built for chat and instruction use. It incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters; the Instruct variant is fine-tuned by Mistral. Live index: 1 priced offer. Best input price: $0.500 per million tokens, from Fireworks AI. Best output price: $0.500 per million tokens, from Fireworks AI.
Pricing across providers
All figures are list prices per million tokens unless a column says otherwise. 1 offer is listed for Mixtral 8x7B Instruct; the best input price in this view comes from Fireworks AI.
| Provider | Input / 1M | Output / 1M | Cached input | Batch |
|---|---|---|---|---|
| Fireworks AI | $0.500 | $0.500 | — | — |
Cost calculator
Pick any provider row and enter how many tokens you expect per day, week, or year; we turn that into rough dollar totals for Mixtral 8x7B Instruct.
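The calculator's arithmetic is simple enough to sketch. The sketch below uses the Fireworks AI list prices from the table above; the traffic profile (requests per day and tokens per request) is a hypothetical assumption for illustration.

```python
# List prices for Mixtral 8x7B Instruct on Fireworks AI, in $ per 1M tokens
# (from the pricing table above).
INPUT_PRICE_PER_M = 0.500
OUTPUT_PRICE_PER_M = 0.500

def daily_cost(requests_per_day: int, input_tokens: int, output_tokens: int) -> float:
    """Rough dollar cost per day for a given traffic profile."""
    tokens_in = requests_per_day * input_tokens
    tokens_out = requests_per_day * output_tokens
    return (tokens_in / 1_000_000) * INPUT_PRICE_PER_M + (
        tokens_out / 1_000_000
    ) * OUTPUT_PRICE_PER_M

# Hypothetical profile: 10,000 requests/day, 800 input + 200 output tokens each.
print(f"${daily_cost(10_000, 800, 200):.2f} per day")  # → $5.00 per day
```

Weekly or yearly totals are just the daily figure multiplied by 7 or 365; actual bills may differ from list-price estimates.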
Model specifications
These fields describe Mixtral 8x7B Instruct as stored in our index (family: Mixtral; source: Mistral AI). They sit alongside pricing so buyers can check limits and capabilities in one place.
- Context window: 32,768 tokens
- Max output: 32,768 tokens
- Vision (images): No
- Tool / function calling: Yes
- Streaming: No
- Released: Dec 2023
- Primary provider: Mistral AI
- Model family: Mixtral
Compare Mixtral 8x7B Instruct
These links open full side-by-side pricing comparison pages for Mixtral 8x7B Instruct. We picked pairs that people often shop together; 6 comparisons are ready to open.
- Mixtral 8x7B Instruct vs Mistral 7B
- Mixtral 8x7B Instruct vs Mistral Large
- Mixtral 8x7B Instruct vs Mixtral 8x7B
- Mixtral 8x7B Instruct vs GPT-4o
- Mixtral 8x7B Instruct vs GPT-4o mini
- Mixtral 8x7B Instruct vs Claude Sonnet 4.6
Frequently asked questions
Quick answers to frequently asked questions about Mixtral 8x7B Instruct pricing and limits. For a short description of the model itself, see the note at the top of this page.
Also from Mistral AI
Other models by Mistral AI with live pricing in our catalog.