GongCQ
### **The code is:** (package versions: transformers==4.21.1, torch==1.11.0, deepspeed==0.6.5, CUDA 11.3, GPU: RTX 3090)

```python
import torch
from transformers import BertTokenizer, BartForConditionalGeneration, BertModel, BertLMHeadModel
from transformers.activations import GELUActivation
from deepspeed.profiling.flops_profiler import FlopsProfiler
...
```
Add the parameter `strict=strict` to the `layer.load_state_dict()` call in `PipelineModule.load_state_dir()`, so the caller's `strict` flag is actually forwarded to each layer load.
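A pure-Python sketch of the `strict` semantics this suggestion would thread through: with `strict=True` a key mismatch raises, with `strict=False` extra or missing keys are tolerated. The function and variable names here are illustrative, not DeepSpeed's actual implementation.

```python
def load_state_dict(module_state, checkpoint_state, strict=True):
    # Dict-based stand-in for a per-layer state load.
    missing = set(module_state) - set(checkpoint_state)
    unexpected = set(checkpoint_state) - set(module_state)
    if strict and (missing or unexpected):
        raise KeyError(f"missing={sorted(missing)}, unexpected={sorted(unexpected)}")
    # Copy only the keys the module actually owns.
    for key in module_state.keys() & checkpoint_state.keys():
        module_state[key] = checkpoint_state[key]
    return module_state

def load_state_dir(layer_states, checkpoint_states, strict=True):
    # The point of the suggestion: forward the caller's `strict` flag to
    # every per-layer load instead of silently using the default.
    for layer_state, ckpt in zip(layer_states, checkpoint_states):
        load_state_dict(layer_state, ckpt, strict=strict)
```

With `strict=False`, a checkpoint that carries extra keys no longer aborts the pipeline load.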
After a Google search command was executed, the search result was saved into `full_message_history`, but it was never included in the messages passed to ChatGPT via `openai.ChatCompletion.create()`. ChatGPT would never know...
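A hypothetical sketch of the fix being described: make sure entries appended to `full_message_history` actually end up in the `messages` payload handed to the chat-completion call. Function names and the history format are illustrative assumptions, not the project's real code.

```python
full_message_history = []

def record_command_result(command, result):
    # Store the tool output (e.g. a search result) in the running history.
    full_message_history.append(
        {"role": "system", "content": f"Command {command} returned: {result}"}
    )

def build_messages(history, user_prompt, context_window=10):
    # Include the most recent history entries, search results included,
    # in the context that is sent to the model.
    return history[-context_window:] + [{"role": "user", "content": user_prompt}]

record_command_result("google", "DeepSpeed 0.6.5 release notes ...")
messages = build_messages(full_message_history, "Summarize the findings.")
# `messages` now contains the search result and is what would be passed as
# openai.ChatCompletion.create(model=..., messages=messages)
```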
In mbart there are two additional LayerNorm layers compared to bart. How can I modify the bart model in lightseq to support mbart? Thanks!
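In Hugging Face's implementation, the extra LayerNorms are the final `layer_norm` applied after the encoder stack and after the decoder stack, which BART lacks. A pure-Python sketch of where that extra normalization slots in (the LayerNorm math and the encoder loop are minimal stand-ins, not LightSeq code):

```python
import math

def layer_norm(x, eps=1e-5):
    # Minimal LayerNorm over a flat list: zero mean, unit variance.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def encoder_forward(hidden, layers, final_layer_norm=None):
    for layer in layers:
        hidden = layer(hidden)
    # mBART-style: one extra LayerNorm after the whole stack.
    # BART-style: final_layer_norm is None and this is skipped.
    if final_layer_norm is not None:
        hidden = final_layer_norm(hidden)
    return hidden
```

Supporting mBART would mean adding this optional final normalization (and its weights) to both the encoder and the decoder paths.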