Ask HN: Anyone Successfully fine-tuning LLMs?
3 points by Mythli 4 months ago | 2 comments
Have any of you successfully fine-tuned an LLM?

I have made several attempts at simple use cases, and the result was always extremely poor generalization.

Any experience, guides, or examples would be valuable.



I have done my share of fine-tuning on open-source LLMs (e.g. Llama). I'm surprised you're seeing such poor generalization.

I assume you're using standard techniques like LoRA/QLoRA, which leaves data quality as the likely culprit. Can you share more details on the format of your data points? e.g. Q/A pairs, free text, ...
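For readers unfamiliar with the technique the comment above mentions: LoRA freezes the pretrained weight matrix W and trains only a low-rank update BA alongside it. A toy sketch in plain Python/NumPy (tiny dimensions chosen for illustration, not anyone's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight (d_out x d_in); LoRA never updates this.
d_in, d_out, r = 8, 8, 2
W = rng.standard_normal((d_out, d_in))

# Low-rank adapters: A is small random, B starts at zero,
# so training begins exactly at the base model.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 16  # scaling hyperparameter from the LoRA paper

def forward(x):
    # Base projection plus scaled low-rank correction: W x + (alpha/r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B == 0, the adapted model is identical to the base model.
assert np.allclose(forward(x), W @ x)
```

Only A and B (here 2*(8*2) = 32 parameters instead of 64) receive gradients during fine-tuning, which is why LoRA is cheap; QLoRA additionally quantizes the frozen W to 4-bit.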


I've tried it a few times without much success. I think it takes more data and discipline than most are prepared for.

RAG is a lot easier to reason about and much cheaper to iterate.
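To illustrate why RAG is easier to reason about: the whole pipeline is retrieve-then-prompt, with no training loop. A minimal sketch using bag-of-words cosine similarity as a stand-in for a real embedding model (the corpus and function names here are hypothetical):

```python
from collections import Counter
import math

# Toy document store standing in for your real corpus.
docs = [
    "LoRA adds low-rank adapters to a frozen base model.",
    "RAG retrieves relevant documents and stuffs them into the prompt.",
    "QLoRA quantizes the base model to 4-bit before adding adapters.",
]

def bow(text):
    # Bag-of-words vector; a real system would use dense embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query):
    # Retrieved context is prepended; the LLM itself stays unchanged.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How does RAG work?"))
```

Iterating means swapping the retriever or the prompt template, not re-running a training job, which is what makes each iteration cheap.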



