Haitao Xiao

Results 27 comments of Haitao Xiao

> ### Please describe your question
>
> Couldn't find 'eprstmt' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/eprstmt/eprstmt.py No module named 'paddlenlp.datasets.EPRSTMT'

> The EPRSTMT subset of FewCLUE is loaded like this:
>
> ```
> from paddlenlp.datasets import load_dataset
> data_ds = load_dataset("fewclue", name="eprstmt", splits=["train_0"])
> ```

Thank you.

> https://milvus.io/docs/install_standalone-helm.md#Connect-to-Milvus
> Did you install it with the operator/Helm? You can actually connect to Milvus the same way, since everything is installed in k8s.

I ran into the same problem.
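In case it helps, one common way to reach a Milvus instance running inside the cluster from outside it is a `kubectl port-forward`; the service name `my-release-milvus` below is an assumption based on a Helm release named `my-release`, so check it against your actual deployment:

```
# Placeholder service name -- list the real one first:
#   kubectl get svc
# Then forward Milvus's gRPC port (19530) to localhost:
kubectl port-forward service/my-release-milvus 19530:19530
```

With the forward running, a client on the same machine should be able to connect to `localhost:19530`.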

> A "prototype" torch.cond exists today: https://pytorch.org/docs/main/generated/torch.cond.html#torch.cond. Try it out and please let us know how it goes by opening issues over at github.com/pytorch/pytorch. cc @ydwu4

Hi, can I avoid this...

> Hi, I'm not sure what's the meaning of "start from checkpoint". Did you mean continue training from the pre-trained checkpoint?

Hi, exactly QAQ. I may not be phrasing this professionally: you fine-tuned the same base model in two stages, and I only want to fine-tune the base model that you pre-trained on your data, the same model. Do I just use the same bash file? QAQ That's how I understand it. For example, with LLaMA and Mini-Gemini-Llama, what I want to fine-tune is Mini-Gemini's Llama.

> Same question.

My guess is that's how it works, but let's wait for the author's answer.

> ```
> from minigemini.model.builder import load_pretrained_model
>
> # load Mini-Gemini-2B
> tokenizer, model, image_processor, context_len = load_pretrained_model(
>     model_path="work_dirs/Mini-Gemini-2B",
>     model_base=None,
>     model_name='YanweiLi/Mini-Gemini-2B',
>     load_8bit=False,
>     load_4bit=False,
>     device_map="auto",
>     device="cuda",
>     use_flash_attn=False,
> )
> ```

https://huggingface.co/YanweiLi/Mini-Gemini-8x7B/blob/main/config.json [ elif "mixtral"...