Code Llama 7b Python pricing
If you are budgeting for Code Llama 7b Python, start with the numbers below. We currently index pricing from 1 provider. The cheapest input price is $0.200 per million tokens, and the cheapest output price is also $0.200 per million tokens. The model lists a 16K (16,384-token) context window in our data.
Pricing across providers
Use this table to read Code Llama 7b Python list prices. We show 1 source right now; the lowest input price in the grid comes from Fireworks AI. The chart below the table is most useful when output prices are much higher than input prices.
| Provider | Input / 1M | Output / 1M | Cached input | Batch |
|---|---|---|---|---|
| Fireworks AI | $0.200 | $0.200 | — | — |
[Chart: input vs output price per 1M tokens]
Cost calculator
Pick any provider row and enter how many tokens you expect per day, week, or year. We turn that into rough dollar totals for Code Llama 7b Python.
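The calculator's math can be sketched in a few lines. This is a minimal illustration, not the site's actual implementation; the function name and the example request volume are assumptions, while the prices come from the Fireworks AI row in the table above.

```python
# List prices from the table above (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.200
OUTPUT_PRICE_PER_M = 0.200

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough dollar cost for one request at the list prices above."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Hypothetical workload: 1,000 prompt tokens + 500 completion tokens
# per request, 10,000 requests per day.
per_request = estimate_cost(1_000, 500)
per_day = per_request * 10_000
print(f"${per_request:.6f} per request, ${per_day:.2f} per day")
```

At these rates a 1,000-in / 500-out request costs $0.0003, so 10,000 such requests come to about $3.00 per day.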
Model specifications
Context length, caps, and capability flags for Code Llama 7b Python. Values follow the primary provider (Meta) record in our index.
- Context window: 16,384 tokens
- Max output: 16,384 tokens
- Vision (images): No
- Tool / function calling: No
- Streaming: No
- Released: N/A
- Primary provider: Meta
- Model family: N/A
Compare Code Llama 7b Python
Jump into a comparison when you want one table for two models instead of two tabs. We have 6 curated matches for Code Llama 7b Python, each with side-by-side pricing.
- Code Llama 7b Python vs Llama 3.1 70B
- Code Llama 7b Python vs Llama 3.1 8B
- Code Llama 7b Python vs GPT-4o
- Code Llama 7b Python vs GPT-4o mini
- Code Llama 7b Python vs Claude Sonnet 4.6
- Code Llama 7b Python vs Gemini 2.0 Flash
Frequently asked questions
Quick answers to common questions about Code Llama 7b Python pricing and limits.
Also from Meta
Other models by Meta with live pricing in our catalog.