
GPT-4.1 Nano pricing

For tasks that demand low latency, GPT-4.1 nano is the fastest and cheapest model in the GPT-4.1 series. It delivers exceptional performance at a small size with its 1 million token context window, and scores 80.1% on MMLU, 50.3% on GPQA, and 9.8% on Aider polyglot coding, all higher than GPT-4o mini. It's ideal for tasks like classification or autocompletion. Live index: 5 priced offers. Best input: $0.100 per million tokens, from Openrouter. Best output: $0.400 per million tokens, from Openrouter.

1.0M context · 5 providers · verified May 2, 2026
Best input: $0.100 per 1M tokens · Openrouter
Best output: $0.400 per 1M tokens · Openrouter

Pricing across providers

Use this table to read GPT-4.1 Nano list prices. We show 5 sources right now; the lowest input price in the grid comes from Openrouter. The chart below the table is useful when output prices run much higher than input prices.

| Provider | Input / 1M | Output / 1M | Cached input / 1M |
| --- | --- | --- | --- |
| Openrouter | $0.100 | $0.400 | $0.025 |
| Vercel Ai Gateway | $0.100 | $0.400 | $0.025 |
| Azure | $0.100 | $0.400 | $0.025 |
| OpenAI (native) | $0.100 | $0.400 | $0.025 |
| Replicate | $0.100 | $0.400 | not listed |
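For programmatic comparisons, the table above can be held in a small lookup structure. A minimal Python sketch (provider names and prices are copied from the table; `cheapest` is an illustrative helper, not part of any API, and Replicate lists no cached-input price):

```python
# Per-million-token list prices for GPT-4.1 Nano, copied from the table above.
# None means the provider does not list a cached-input price.
PRICES = {
    "Openrouter":        {"input": 0.100, "output": 0.400, "cached_input": 0.025},
    "Vercel Ai Gateway": {"input": 0.100, "output": 0.400, "cached_input": 0.025},
    "Azure":             {"input": 0.100, "output": 0.400, "cached_input": 0.025},
    "OpenAI (native)":   {"input": 0.100, "output": 0.400, "cached_input": 0.025},
    "Replicate":         {"input": 0.100, "output": 0.400, "cached_input": None},
}

def cheapest(metric: str) -> tuple[str, float]:
    """Return (provider, price) with the lowest listed price for a metric.

    Providers that do not list the metric are skipped; ties resolve to the
    first provider in table order.
    """
    candidates = {p: v[metric] for p, v in PRICES.items() if v[metric] is not None}
    provider = min(candidates, key=candidates.get)
    return provider, candidates[provider]
```

Since all five providers currently list identical prices, the lookup mostly matters once the live index diverges.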

[Chart: input vs output price, per provider]

Cost calculator

Pick any of the providers above and type how many tokens you expect per day, week, or year. We turn that into rough dollar totals for GPT-4.1 Nano.

Example at the native rates (in $0.100/M · out $0.400/M · cached in $0.025/M): roughly 0.01¢ per request for input and 0.02¢ per request for output, which works out to about $3.00 daily, $90 monthly, and $1.1K annually.
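The calculator's arithmetic is straightforward to reproduce. A minimal sketch (the token counts and request volume below are illustrative inputs, not values from our index; 1,000 input plus 500 output tokens per request at 10,000 requests/day is one combination that reproduces the $3.00/day figure shown above):

```python
def daily_cost(input_tokens: int, output_tokens: int, requests_per_day: int,
               in_price: float = 0.100, out_price: float = 0.400) -> float:
    """Rough daily dollar cost at per-million-token list prices."""
    per_request = (input_tokens * in_price + output_tokens * out_price) / 1_000_000
    return per_request * requests_per_day

# 1,000 input + 500 output tokens/request, 10,000 requests/day:
# (0.0001 + 0.0002) * 10,000 = $3.00/day
```

Multiply the daily figure by 30 for a monthly estimate ($90) or by 365 for an annual one (about $1.1K), matching the totals above.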

Model specifications

Context length, caps, and capability flags for GPT-4.1 Nano. Family: GPT-4. Values follow the main provider (OpenAI) record in our index.

Context window: 1,047,576 tokens
Max output: 32,768 tokens
Vision (images): Yes
Tool / function calling: Yes
Streaming: No
Released: Apr 2025
Primary provider: OpenAI
Model family: GPT-4
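The context and output caps above can be checked client-side before sending a request. A minimal sketch, assuming you have already counted your prompt tokens by some other means (the limits are taken from the spec table; `fits` is a hypothetical helper):

```python
CONTEXT_WINDOW = 1_047_576  # tokens, per the spec table
MAX_OUTPUT = 32_768         # tokens, per the spec table

def fits(prompt_tokens: int, requested_output_tokens: int) -> bool:
    """Check a request against GPT-4.1 Nano's listed limits:
    the output cap, and prompt + output within the context window."""
    if requested_output_tokens > MAX_OUTPUT:
        return False
    return prompt_tokens + requested_output_tokens <= CONTEXT_WINDOW
```

For example, a 1,000,000-token prompt still leaves room for the full 32,768-token output cap inside the 1,047,576-token window.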

Compare GPT-4.1 Nano

Open a pair page to see GPT-4.1 Nano next to another model with a shared provider matrix. 6 shortcuts below.

Frequently asked questions

Answers pull from the same numbers you see on this page, along with the short model note from our index, quoted in full at the top of this page.

How much does GPT-4.1 Nano cost?
GPT-4.1 Nano costs $0.10 per million input tokens and $0.40 per million output tokens via the native API. Prompt caching reduces input costs to $0.025/M tokens.
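At scale, the caching discount blends with the full input price according to your cache hit rate. A sketch of that arithmetic, assuming a hit rate you estimate yourself (the function is illustrative, not part of any API):

```python
def effective_input_price(hit_rate: float, full: float = 0.100,
                          cached: float = 0.025) -> float:
    """Blended per-million input price, given the fraction of input
    tokens served from the prompt cache (0.0 to 1.0)."""
    return hit_rate * cached + (1.0 - hit_rate) * full

# e.g. a 50% cache hit rate blends to $0.0625 per million input tokens
```

At a 100% hit rate the blended price bottoms out at the $0.025/M cached rate; at 0% it is the full $0.100/M list price.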

Also from OpenAI

Other models by OpenAI with live pricing in our catalog.