Hacker News

> It also includes o1 pro mode, a version of o1 that uses more compute to think harder

I like that this more or less confirms that OpenAI can simply adjust how much compute a request gets and still say you're getting the full power of whatever model they're running. I wouldn't be surprised if the compute allocated to "pro mode" is roughly equivalent to the standard allocation free users got before the models all mysteriously and dramatically got stupider.



They are just feeding the sausage back into the machine over and over until it is more refined.


It is amazing that we are giving billions of dollars to a group of people that saw Human Centipede and thought “this is how we will cure cancer or make some engineering tasks easier or whatever”


This was part of the premise of o1 though, no? By encouraging the model to output shorter/longer chains of thought, you can scale model performance (and costs) down/up at inference time.
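One well-known way to trade inference compute for quality (beyond just longer chains of thought) is self-consistency: sample several independent answers and take the majority vote. The sketch below is purely illustrative, not OpenAI's method; `noisy_model` is a hypothetical stand-in that answers correctly with some fixed probability, so more samples means more compute and higher accuracy.

```python
import random
from collections import Counter

def noisy_model(rng, correct=42, p=0.6):
    """Toy stand-in for a model: returns the correct answer with
    probability p, otherwise a random wrong answer. (Hypothetical;
    a real model emits a chain of thought, not an integer.)"""
    if rng.random() < p:
        return correct
    return rng.choice([x for x in range(40, 50) if x != correct])

def majority_vote(rng, n_samples):
    """Spend n_samples times the compute: sample n answers, take the mode."""
    votes = [noisy_model(rng) for _ in range(n_samples)]
    return Counter(votes).most_common(1)[0][0]

def accuracy(n_samples, trials=2000, seed=0):
    """Empirical accuracy of majority voting over `trials` questions."""
    rng = random.Random(seed)
    hits = sum(majority_vote(rng, n_samples) == 42 for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    for n in (1, 5, 25):
        print(f"{n:2d} samples -> accuracy {accuracy(n):.3f}")
```

With a 60%-accurate base model, one sample gives ~0.6 accuracy while 25 samples and a majority vote get close to 1.0, which is the general shape of the test-time-compute tradeoff the o1 announcement describes.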



