
lm_head and v_head, why re-initialize and why dropout?

Open clam004 opened this issue 3 years ago • 2 comments

First off, thank you for building this! Three questions regarding the two heads of the policy model:

  1. Why re-initialize the weights of the language model head in class GPT2HeadWithValueModel

     self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)

     when a trained lm_head already exists in GPT2LMHeadModel?

  2. Why does the model still speak coherently before training even though the lm_head weights of the model are random?

from 01-gpt2-with-value-head.ipynb

My most favourite movie is Captain America: Civil War, which moved into the
My least favourite movie is Jon Favreau's Log Horizon, complete with psychedelic
  3. Why use dropout on your value? The value is not like an entire layer of a neural network where you don't want the model to rely too heavily on one activation; the value is the one and only signal you get from that layer, so why drop it out?
  (v_head): ValueHead(
    (summary): Linear(in_features=768, out_features=1, bias=True)
    (activation): Identity()
    (first_dropout): Dropout(p=0.1, inplace=False)
    (last_dropout): Identity()
    (flatten): Flatten(start_dim=1, end_dim=-1)
  )

Thanks again!

clam004 · Aug 14, 2022

So I did some research on my own, and basically my first two questions can be answered by looking at the Hugging Face transformers repository: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py

clam004 · Aug 30, 2022

> So I did some research on my own, and basically my first two questions can be answered by looking at the Hugging Face transformers repository: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py

Hi @clam004, do you mind explaining your understanding of why they do it? Thanks!

danjohnvelasco · Sep 8, 2022

@danjohnvelasco as long as you use the same name self.lm_head, when you load the pretrained model from the dictionary of parameters, those linear parameters are replaced with the trained ones. That's why the model still works (question 2). Regarding question 3, I suspect it somehow doesn't matter, although I'm not sure why: when I run this repo without the dropout layer it behaves the same, as expected.
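
Here is a rough sketch of what I mean, with toy modules standing in for the real GPT2LMHeadModel / GPT2HeadWithValueModel (the class names and sizes below are made up for illustration). The point is that load_state_dict matches parameters by name, so a freshly initialized lm_head is overwritten by the saved weights as long as the attribute name is the same:

    import torch
    import torch.nn as nn

    # Toy stand-ins for GPT2LMHeadModel and GPT2HeadWithValueModel (illustration only).
    class TinyLMHeadModel(nn.Module):
        def __init__(self, n_embd=8, vocab_size=16):
            super().__init__()
            self.lm_head = nn.Linear(n_embd, vocab_size, bias=False)

    class TinyHeadWithValueModel(nn.Module):
        def __init__(self, n_embd=8, vocab_size=16):
            super().__init__()
            # freshly (randomly) initialized, like the lm_head in GPT2HeadWithValueModel ...
            self.lm_head = nn.Linear(n_embd, vocab_size, bias=False)
            self.v_head = nn.Linear(n_embd, 1)

    pretrained = TinyLMHeadModel()       # pretend these are the trained weights
    wrapper = TinyHeadWithValueModel()   # random lm_head + random v_head

    # Parameters are matched by name, so "lm_head.weight" is overwritten by the
    # trained weights; strict=False leaves the extra v_head parameters random.
    wrapper.load_state_dict(pretrained.state_dict(), strict=False)

    assert torch.equal(wrapper.lm_head.weight, pretrained.lm_head.weight)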

clam004 · Dec 14, 2022

Regarding 3, I agree, and we moved the dropout before the linear layer in https://github.com/lvwerra/trl/pull/70.
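
Roughly, the idea is something like this (a sketch, not the exact code from the PR; the class name is made up): dropout is applied to the hidden states before the linear projection, so individual hidden features are dropped rather than the single value output itself.

    import torch.nn as nn

    class ValueHeadSketch(nn.Module):
        # Illustration only: dropout acts on the hidden states, not on the scalar value.
        def __init__(self, hidden_size=768, dropout_prob=0.1):
            super().__init__()
            self.dropout = nn.Dropout(dropout_prob)
            self.summary = nn.Linear(hidden_size, 1)

        def forward(self, hidden_states):
            x = self.dropout(hidden_states)   # drop individual hidden features ...
            return self.summary(x)            # ... then project to one value per token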

lvwerra · Jan 13, 2023