
Most Grok models have a 2M context window.

For example their latest model `grok-4-1-fast-reasoning`:

- Context window: 2M

- Rate limits: 4M tokens per minute, 480 requests per minute

- Pricing: $0.20/M input, $0.50/M output

Grok is not as good at coding as Claude, for example, but for research it is incredible. They now have a dedicated coding model, though I have not tried it yet.

https://docs.x.ai/developers/models
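As a rough sketch of how you might call that model: the xAI API is assumed here to expose an OpenAI-compatible chat completions endpoint (the endpoint URL and request shape below are assumptions based on the linked docs, not copied from this thread). Only the model name `grok-4-1-fast-reasoning` comes from the comment above.

```python
# Hedged sketch: query the xAI API with the model discussed above.
# API_URL and the request/response shape are assumptions (OpenAI-style
# chat completions); check https://docs.x.ai/developers/models.
import json
import os
import urllib.request

API_URL = "https://api.x.ai/v1/chat/completions"  # assumed endpoint

# Request payload naming the 2M-context model from the comment.
payload = {
    "model": "grok-4-1-fast-reasoning",
    "messages": [
        {"role": "user", "content": "Summarize recent work on long-context LLMs."}
    ],
}


def ask_grok(payload: dict) -> str:
    """POST the payload; requires the XAI_API_KEY environment variable."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['XAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed OpenAI-style response structure.
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__" and os.environ.get("XAI_API_KEY"):
    print(ask_grok(payload))
```

The network call only fires when an API key is present, so the payload itself can be inspected or reused with whatever client library you prefer.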




What kind of research do you use it for?



