JingfanChen
> > My second question is: Is there any way to do this conveniently?
>
> #403

Thank you for your quick reply!! The solution in #403 seems to be...
@zhangyunming @Adyoshkin @WXinlong Have you found a solution? I had the same problem.
Thanks for your reply!! I have another question about the training dataset: According to the paper, it uses **{20,205 prompts, 79,167 images}** to _train the classifier_, while using **37,572 preferred...
@Timothyxxx @LukeForeverYoung Thanks for your reply. I have two questions: 1. Tasks in AndroidWorld or other benchmarks involve information retrieval and require the history to contain not only the...
@robertgshaw2-neuralmagic Any update? Thanks.
@zachzzc @raywanb Still facing the same issue when adopting the following model:

```python
self.vlm_model = LLM(
    model="openbmb/MiniCPM-V-2_6",
    max_model_len=4096,
    trust_remote_code=True,
    gpu_memory_utilization=0.5,
    enable_prefix_caching=True,
)
```

My vllm version is `0.5.4+cu122`. Here are...
> Can you share the input that caused this to error?

@raywanb Here is my input:

```python
sampling_params = SamplingParams(temperature=0.5, max_tokens=1, prompt_logprobs=1, stop=["", ""])
messages = [{
    'role': 'user',
    ...
```
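For reference, a minimal self-contained sketch of how the two snippets above fit together with vLLM's offline `LLM.generate` API; the placeholder prompt and the printed fields are assumptions for illustration, not the original reproduction script:

```python
from vllm import LLM, SamplingParams

# Assumed setup, mirroring the configuration quoted above:
# load MiniCPM-V-2_6 with prefix caching enabled.
llm = LLM(
    model="openbmb/MiniCPM-V-2_6",
    max_model_len=4096,
    trust_remote_code=True,
    gpu_memory_utilization=0.5,
    enable_prefix_caching=True,
)

# Request a single generated token plus per-token logprobs over the prompt,
# as in the SamplingParams shown above (stop strings omitted here).
sampling_params = SamplingParams(temperature=0.5, max_tokens=1, prompt_logprobs=1)

# "example prompt" is a hypothetical placeholder; the real messages list is truncated above.
outputs = llm.generate(["example prompt"], sampling_params)
for out in outputs:
    print(out.prompt_logprobs)    # logprobs for each prompt token
    print(out.outputs[0].text)    # the single generated token
```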
Upgrading the sglang version resolves this.