When you've trained your model on all available data, the only things left to improve are the training algorithm and the system prompt; the latter is far easier and faster to tweak. The system prompts may keep growing, but they can't exceed the token limit. To get past that limit, vendors could create topic-specific system prompts, each selected by another, smaller system prompt, using the LLM twice:

user's-prompt + topic-picker-prompt -> LLM -> topic-specific-prompt + user's-prompt -> LLM

This will enable the cumulative size of system prompts to exceed the LLM's token limit. But this will only occur if we happen to live in a net-funny universe, which physicists have not yet determined.
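Something like the following minimal sketch, where complete() stands in for whatever single chat-completion call is being made, and the topic names and prompt texts are invented placeholders rather than anyone's real prompts:

    # Two-pass prompt routing: a small "picker" call chooses which
    # topic-specific system prompt to use for the real call.

    TOPIC_PROMPTS = {
        "coding":  "You are a careful programming assistant. Prefer small, tested examples.",
        "writing": "You are an editor. Tighten prose and flag unsupported claims.",
        "general": "You are a helpful, concise assistant.",
    }

    # The topic-picker system prompt only has to name a topic,
    # so it stays well under the token limit.
    PICKER_PROMPT = (
        "Classify the user's request as exactly one of: "
        + ", ".join(TOPIC_PROMPTS) + ". Reply with the topic name only."
    )

    def complete(system: str, user: str) -> str:
        """Placeholder for one LLM call with a system prompt and a user prompt."""
        raise NotImplementedError("wire this to your chat-completion API")

    def answer(user_prompt: str) -> str:
        # Pass 1: user's-prompt + topic-picker-prompt -> LLM -> topic name
        topic = complete(PICKER_PROMPT, user_prompt).strip().lower()
        system = TOPIC_PROMPTS.get(topic, TOPIC_PROMPTS["general"])
        # Pass 2: user's-prompt + topic-specific-prompt -> LLM -> final answer
        return complete(system, user_prompt)

Only the picker prompt plus the one selected topic prompt are in context for any single call, so the combined size of all the topic prompts can grow past the model's token limit without any individual call exceeding it.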


