
When using glm-2b and following the example provided in the README, I get very bad output

leekum2018 opened this issue 2 years ago · 2 comments

The code is as follows:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-2b", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/glm-2b", trust_remote_code=True)
model = model.half().cuda()
model.eval()
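The log below repeatedly warns that an explicit revision should be passed when loading remote code; pinning one keeps the downloaded modeling code and the checkpoint weights in sync. A minimal sketch, assuming "main" points at a consistent snapshot (any fixed commit hash works as well):

# Pin a revision so the remote code and the weights come from the same
# snapshot (revision="main" is an assumption; a commit hash is safer).
tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-2b", trust_remote_code=True, revision="main")
model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/glm-2b", trust_remote_code=True, revision="main")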

inputs = tokenizer("Ng is an adjunct professor at [MASK] (formerly associate professor and Director of its Stanford AI Lab or SAIL ). Also a pioneer in online education, Ng co-founded Coursera and deeplearning.ai.", return_tensors="pt")
inputs = tokenizer.build_inputs_for_generation(inputs, max_gen_length=512).to('cuda')
outputs = model.generate(**inputs, max_length=512, eos_token_id=tokenizer.eop_token_id)
print(tokenizer.decode(outputs[0].tolist()))
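Note that outputs[0] holds the prompt tokens followed by the generated span, so the decode call above prints both. A small helper, assuming generate() keeps roughly that layout (the decoded log below suggests it does), to print only the continuation:

# Slice off the prompt so (roughly) only the model's continuation is decoded.
prompt_len = inputs["input_ids"].shape[1]
print(tokenizer.decode(outputs[0][prompt_len:].tolist()))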

The output is:

Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Some weights of the model checkpoint at THUDM/glm-2b were not used when initializing GLMForConditionalGeneration: ['dense.weight', 'out_proj.bias', 'out_proj.weight', 'dense.bias']

  • This IS expected if you are initializing GLMForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing GLMForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
Setting pad_token_id to eos_token_id:50258 for open-end generation.

[CLS] Ng is an adjunct professor at [MASK] (formerly associate professor and Director of its Stanford AI Lab or SAIL ). Also a pioneer in online education, Ng co-founded Coursera and deeplearning.ai.<|endoftext|> <|startofpiece|> 111111111 each1 when1111111 when11 each1 each1 each1 each1 each1 each1 each1 each1 each1 each1 each1 each1 each1 each111111111111111111111111111111111111111111111111111111 when11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
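The unused weights ('dense.*', 'out_proj.*') look like a spare classification head and are usually harmless; the degenerate run of repeated tokens points more toward a numerical problem. A debugging sketch, assuming the half() cast is the culprit (fp16 overflow on some GPUs is a known source of repeated-token output; this is an assumption, not a confirmed fix):

# Keep the weights in fp32 instead of casting to fp16; slower and more
# memory-hungry, but it rules precision in or out as the cause.
model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/glm-2b", trust_remote_code=True)
model = model.float().cuda()
model.eval()

inputs = tokenizer("Ng is an adjunct professor at [MASK].", return_tensors="pt")
inputs = tokenizer.build_inputs_for_generation(inputs, max_gen_length=512).to('cuda')
outputs = model.generate(**inputs, max_length=512, eos_token_id=tokenizer.eop_token_id)
print(tokenizer.decode(outputs[0].tolist()))

If fp32 yields sensible text where fp16 did not, the half() cast is the problem.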

leekum2018 · May 26 '23

Same problem here. Has it been solved?

kolaen · Jun 02 '23

Same question here.

Jxxiang99 · Nov 09 '23