lm_head and v_head, why re-initialize and why dropout?
First off, thank you for building this! 3 questions regarding the two heads of the policy model:
- why re-initialize the weights in the language model head in
class GPT2HeadWithValueModel
self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
when a trained lm_head already exists in GPT2LMHeadModel?
- why does the model still speak coherently before training, even though the lm_head weights are randomly re-initialized?
from 01-gpt2-with-value-head.ipynb
My most favourite movie is Captain America: Civil War, which moved into the
My least favourite movie is Jon Favreau's Log Horizon, complete with psychedelic
- Why use dropout on your value? The value is not like a full layer of a neural network, where you don't want the model to rely too heavily on any single activation; the value is the one and only signal you get from that layer, so why drop it out? (See the sketch after the module printout below.)
(v_head): ValueHead(
  (summary): Linear(in_features=768, out_features=1, bias=True)
  (activation): Identity()
  (first_dropout): Dropout(p=0.1, inplace=False)
  (last_dropout): Identity()
  (flatten): Flatten(start_dim=1, end_dim=-1)
)
Thanks again!
So I did some research on my own and basically my first 2 questions can be answered by looking at the huggingface transformers repository: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py
Hi @clam004, do you mind explaining your answer/understanding of why they do it? Thanks!
@danjohnvelasco As long as you keep the same attribute name self.lm_head, loading the pretrained model from the dictionary of parameters (the state dict) replaces these randomly initialized linear parameters with the trained ones; that's why the model still works (question 2). Regarding question 3, I suspect it somehow doesn't matter, although I'm not sure why: when I run this repo without the dropout layer it behaves the same, as expected.
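A minimal, self-contained sketch of that name-matching behaviour (toy module names, not the actual trl/transformers classes): load_state_dict matches parameters by name, so a freshly initialized lm_head is overwritten by the pretrained weights as long as the attribute name matches.

import torch
import torch.nn as nn

class TinyLM(nn.Module):
    # stand-in for a model with a language model head; the head is re-initialized randomly
    def __init__(self, n_embd=8, vocab_size=16):
        super().__init__()
        self.lm_head = nn.Linear(n_embd, vocab_size, bias=False)

pretrained = TinyLM()   # pretend these weights are the trained ones
fresh = TinyLM()        # freshly (randomly) re-initialized head
assert not torch.equal(fresh.lm_head.weight, pretrained.lm_head.weight)

# load_state_dict matches parameters by name ("lm_head.weight"),
# so the random initialization is overwritten by the "trained" values.
fresh.load_state_dict(pretrained.state_dict())
assert torch.equal(fresh.lm_head.weight, pretrained.lm_head.weight)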
Regarding question 3, I agree, and we moved the dropout before the linear layer in https://github.com/lvwerra/trl/pull/70.
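For reference, a minimal sketch of that ordering (assumed names and shapes, not the exact trl implementation): dropout regularizes the hidden states feeding the value head, and the scalar value itself is never dropped.

import torch
import torch.nn as nn

class ValueHeadSketch(nn.Module):
    # toy value head with dropout applied *before* the linear projection
    def __init__(self, n_embd=768, p_drop=0.1):
        super().__init__()
        self.dropout = nn.Dropout(p_drop)
        self.summary = nn.Linear(n_embd, 1)

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, n_embd)
        x = self.dropout(hidden_states)       # drop hidden features, never the value itself
        return self.summary(x).squeeze(-1)    # one scalar value per token: (batch, seq_len)

v_head = ValueHeadSketch()
values = v_head(torch.randn(2, 5, 768))
print(values.shape)   # torch.Size([2, 5])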