Hacker News | howdareme9's comments

They are constantly training new models and retiring older ones; they are losing money.


Which part of "over model lifetime" did you not understand?


That's not a sufficient condition for profitability if both inference and scaling costs continue to increase over time.


Have you got a link to this?


Sorry, I got the author wrong.

It's here: https://github.com/tmustier/pi-for-excel


5.2 Codex is up there with Claude lmao


Agree, but it seems to depend on the field. One day I wanted a browser extension built, and 5.2-codex-max added hundreds of lines of code at a time; across 15-20 iterations I never changed a thing, or even had an opinion on what it was doing. That is extremely uncommon for me with other models, even Opus I would say. And yes, I mostly do small greenfield things, and even that doesn't always work, though LLMs are clearly at their best there.


Not likely at all; people pay for convenience. They don't want to do that.


Yeah, Hacker News users keep thinking the average consumer likes to tinker like we do lol

