> For models with a 1.05M context window (GPT-5.4 and GPT-5.4 pro), prompts with >272K input tokens are priced at 2x input and 1.5x output for the full session for standard, batch, and flex.
Claude has the same deal: you can get a 1M context window, but it's gonna cost you. If you run /model in Claude Code, you get:
Switch between Claude models. Applies to this session and future Claude Code sessions. For other/previous model names, specify with --model.
1. Default (recommended) Opus 4.6 · Most capable for complex work
2. Opus (1M context) Opus 4.6 with 1M context · Billed as extra usage · $10/$37.50 per Mtok
3. Sonnet Sonnet 4.6 · Best for everyday tasks
4. Sonnet (1M context) Sonnet 4.6 with 1M context · Billed as extra usage · $6/$22.50 per Mtok
5. Haiku Haiku 4.5 · Fastest for quick answers
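To make those per-Mtok rates concrete, here's a back-of-envelope sketch using the input/output prices from the /model listing above. The rates come from the quote; the token counts in the example are made up for illustration.

```python
# $ per Mtok (input, output) for the 1M-context tiers, as quoted
# in the /model listing above.
RATES_PER_MTOK = {
    "opus-1m": (10.00, 37.50),
    "sonnet-1m": (6.00, 22.50),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted per-Mtok rates."""
    in_rate, out_rate = RATES_PER_MTOK[model]
    return (input_tokens / 1_000_000) * in_rate \
         + (output_tokens / 1_000_000) * out_rate

# e.g. an 800K-token prompt with a 20K-token reply on Sonnet 1M:
print(round(request_cost("sonnet-1m", 800_000, 20_000), 2))  # 5.25
```

At those rates it's easy to see how a weekend of long-context requests adds up.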
Is that why it says rate limit all the time if you switch to a 1M model on Claude now? It kept giving me that, so I switched to an API account over the weekend for some vibe coding and ran up a huuuuge API bill by mistake, whoops.
> GPT‑5.4 in Codex includes experimental support for the 1M context window. Developers can try this by configuring model_context_window and model_auto_compact_token_limit. Requests that exceed the standard 272K context window count against usage limits at 2x the normal rate.
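For reference, the two keys named in that quote would go in the Codex CLI config file. This is just a sketch: the file path and the specific values below are assumptions for illustration, not recommendations from the docs.

```toml
# Hypothetical snippet for the Codex CLI config (commonly
# ~/.codex/config.toml). Key names are from the quoted docs;
# the values are illustrative.
model = "gpt-5.4"
model_context_window = 1050000           # opt in to the ~1.05M window
model_auto_compact_token_limit = 900000  # compact history before the cap
```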
I can see that's what they mean now that I've read the replies, but when I first read that top comment I too parsed it as meaning 201k would cost the same as 999k. That admittedly seemed strange, so I read the replies to confirm, and sure enough that's not actually the case!
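One way to read the surcharge, per the replies: a request whose input stays under 272K tokens bills at normal rates, and only past that threshold do the 2x input / 1.5x output multipliers kick in. A minimal sketch of that reading, with placeholder base rates (the actual GPT-5.4 prices aren't in the quote):

```python
# Long-context surcharge as the replies describe it: input > 272K
# triggers 2x input / 1.5x output pricing. Base rates below are
# placeholders, NOT GPT-5.4's actual prices.
THRESHOLD = 272_000

def request_cost(input_tokens, output_tokens,
                 base_in_per_mtok=1.0, base_out_per_mtok=4.0):
    long_ctx = input_tokens > THRESHOLD
    in_rate = base_in_per_mtok * (2.0 if long_ctx else 1.0)
    out_rate = base_out_per_mtok * (1.5 if long_ctx else 1.0)
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 201K prompt is under the threshold, a 999K prompt is not,
# so they are NOT billed at the same rate:
print(request_cost(201_000, 10_000))  # normal rates
print(request_cost(999_000, 10_000))  # surcharged rates
```

With these placeholder rates the 999K request costs several times the 201K one, which is the distinction the replies were confirming.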
I agree with the sentiment that companies should help fund open source they depend on, but I think it's a stretch to say those businesses succeeded "only" because of Tailwind. It's a great project, although I'm pretty sure they would have figured out a way to work with CSS without it.
Imagine wasting time cloning the repo, making a change, pushing it, and creating a PR, all because of hate. He could have used that time to fix one of the 38 issues reported by other users...
Disagreement is not hate. I do not agree with the person who wants to get rid of the Taiwanese flag, since as far as I'm concerned Taiwan is a country, but I do not hate him for his request.
Please don't use the word hate for these purposes as all that does is downplay cases of real hate.
If you are taking a slice of your free time just to go through this whole process to create a PR, knowing you will never get that time back, and your only message is "Taiwan is not a country", then either you are a troll or a hate-fueled person. There is no middle ground; just read this part of their comment:
> TaiWan is not a country, is essentially a province of China. currently it's a district because of some history reasons.
> So, TaiWan flag should not be placed along with other REAL country flags, It's a big misleading to website visitors.
Cost of capital is still high. The risk-free rate of return is still high. It's amazing tech has lasted as well as it has. That said, there was a brief "flight to safety" to tech in '08 as well. I think Q3 is going to be painful.
> For models with a 1.05M context window (GPT-5.4 and GPT-5.4 pro), prompts with >272K input tokens are priced at 2x input and 1.5x output for the full session for standard, batch, and flex.
Taken from https://developers.openai.com/api/docs/models/gpt-5.4