
Your biggest obstacle is proving fine-tuning is more effective than prompting, workflow design, RAG, etc. during the initial pass. Most of my customers are still getting big improvements by picking the low-hanging fruit with those approaches. A much smaller fraction is ready to start fine-tuning. Obviously, this will change as AI programs mature.


Exactly! Fine-tuning needs at least 10 examples to even work. That’s why Promptrepo begins with prompting and schema-based generation when teams have little or no data, then gradually shifts to fine-tuning as they gather more examples. It’s the classic cold-start problem, and we’ve simplified it for product teams who want to launch quickly but improve accuracy over time.
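Not the parent, but the cold-start routing described above can be sketched roughly like this. This is a minimal illustration, not Promptrepo's actual code; the names (`choose_strategy`, `FINE_TUNE_THRESHOLD`, `build_few_shot_prompt`) and the threshold of 10 examples (taken from the claim above) are assumptions:

```python
# Hypothetical sketch: route between few-shot prompting and a fine-tuned
# model based on how many labeled examples a team has gathered so far.
FINE_TUNE_THRESHOLD = 10  # "at least 10 examples" per the comment above


def choose_strategy(num_examples: int) -> str:
    """Pick an extraction strategy based on available labeled data."""
    if num_examples < FINE_TUNE_THRESHOLD:
        return "few_shot_prompting"  # cold start: rely on prompt + schema
    return "fine_tuned_model"        # enough data: train and serve a model


def build_few_shot_prompt(schema: dict, examples: list, query: str) -> str:
    """Assemble a prompt from the schema plus whatever examples exist so far."""
    parts = [f"Extract fields matching this schema: {schema}"]
    for text, labels in examples:
        parts.append(f"Input: {text}\nOutput: {labels}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)
```

As examples accumulate past the threshold, the same pipeline can swap the prompting path for a fine-tuned model without changing the calling code.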


Can you share an example of a real-world win where fine-tuning was less effective? I’m curious about sample business cases.



