
How would you respond to the central premise of the article, which I understood as:

* There may not be a lot of differentiation between different LLMs in the long run

* Where there is differentiation, it is in data (both the data used to train the model and the data provided in its context window for a given query)

* Ergo, marrying search to the LLM, while currently in its infancy, will be a big deal and a big differentiator -- because if you can quickly find the right data to pack into the context window, you will get much better results than what we're seeing today.
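The third point above can be sketched in code. This is a minimal, illustrative toy -- a crude keyword-overlap scorer standing in for a real search engine, and a character budget standing in for the context-window limit; none of the names refer to any actual vendor's API.

```python
def score(query, doc):
    """Crude relevance score: fraction of query terms present in the doc."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms)

def build_prompt(query, corpus, max_chars=150):
    """Pack the best-scoring documents into a character budget (a stand-in
    for the context window), then append the user's question."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    context, used = [], 0
    for doc in ranked:
        if used + len(doc) > max_chars:
            break
        context.append(doc)
        used += len(doc)
    return "Context:\n" + "\n".join(context) + "\n\nQuestion: " + query

corpus = [
    "The context window is the text an LLM can attend to in one query.",
    "Retrieval systems rank documents by relevance to a search query.",
    "Bananas are a good source of potassium.",
]
prompt = build_prompt("How does a context window relate to retrieval?", corpus)
```

With the budget set to 150 characters, the two relevant documents fill the window and the irrelevant one is dropped -- which is the whole claim: the retrieval step, not the model, decides what the model gets to see.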



