this post was submitted on 07 Dec 2024
17 points (90.5% liked)
LocalLLaMA
2327 readers
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
founded 2 years ago
This is making me realize that I don’t fully understand the relationship between “instruction-tuned” and “pre-trained”. I thought instruction tuning was a form of fine-tuning, and that fine-tuning comes after the primary training of the model.
A base / pre-trained model is fed a large dataset of random text files: books, Wikipedia, etc. After that, the model can autocomplete text, and it has learned language and concepts about the world. But it won't answer your questions. It'll just continue them, or think you're writing an email or a long list of unanswered questions and write some more questions underneath, instead of engaging with you. Or think it's writing a novel and autocomplete "...that's what the character asked while rolling their eyes." Or something completely arbitrary like that.
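To make that concrete, here's a toy sketch (nothing like a real LLM, just a word-level bigram model I made up for illustration): it is "pre-trained" on raw text, so all it can do is continue text in the style of its training data. Feed it a question and it just keeps going in the same pattern instead of answering.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """'Pre-train' by counting which word follows which in the raw text."""
    words = corpus.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def autocomplete(follows, prompt, n_words=5, seed=0):
    """Continue the prompt by sampling a plausible next word, repeatedly."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# "Training data" that happens to be full of questions:
corpus = "what is the capital of France ? what is the largest city in Europe ?"
model = train_bigrams(corpus)

# Given a question-shaped prompt, the base model just continues the
# pattern it saw in training -- it doesn't try to answer anything.
print(autocomplete(model, "what is the"))
```

The same failure mode shows up in real base models, just with far more fluency: the prompt is treated as text to be continued, not as a request.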
After that major first step, it gets fine-tuned to some task. The procedure is the same: it gets fed different text in almost the same way, which just continues the training. But now it's text that tunes it to its role, for example being a chatbot. It'll see lots of text that is a question, then a special character/token, and then an answer to the question. And it'll learn to reply with a (correct) answer if you put in a question followed by that token. It'll probably also be fine-tuned to write dialogue as a chatbot, and to follow instructions. (And to refuse some things, speak in a less biased way, be nice...)
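A rough sketch of what that training text layout can look like. The special tokens vary from model to model; `<|user|>`, `<|assistant|>`, and `<|end|>` below are made-up placeholders, not any specific model's template.

```python
def format_example(question, answer):
    """Turn one (question, answer) pair into a fine-tuning string:
    question, special token, answer -- exactly the pattern described above."""
    return f"<|user|>{question}<|end|><|assistant|>{answer}<|end|>"

def format_prompt(question):
    """At chat time, the prompt stops right after the assistant token,
    so the tuned model has learned that an answer belongs next."""
    return f"<|user|>{question}<|end|><|assistant|>"

print(format_example("What is the capital of France?", "Paris."))
print(format_prompt("What is the capital of France?"))
```

After enough of these examples, autocompleting the prompt string *is* answering the question, which is the whole trick of instruction tuning.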
You can also put in domain-specific data to make it learn/focus on, say, medicine... I think that's also called fine-tuning. But as far as I understand, teaching knowledge with arbitrary data comes before teaching/tuning it to follow instructions; otherwise it might forget how to follow them.
I think instruction tuning is a form of fine-tuning. It's just called that to distinguish it from other forms of fine-tuning. But I'm not really an expert on any of this.
I was also not sure what this meant, so I asked Google's Gemini, and I think this clears it up for me: