Nam D. Tran
Try this:

```python
with torch.no_grad():
    outputs = self.model(img)
    outputs = postprocess(
        outputs,
        self.exp.num_classes,
        self.exp.test_conf,
        self.exp.nmsthre  # TODO: user-adjustable
    )
    if outputs[0] is not None:
        outputs = outputs[0].cpu().numpy()
```
Hi everyone, I have made a PR to add AWQ. I would really appreciate comments to make it better, thanks! The PR: [Add AWQ](https://github.com/ggerganov/llama.cpp/pull/4593)
`Unsloth 2024.1 patched 16 layers with 16 QKV layers, 16 O layers and 16 MLP layers. [' Below is an instruction that describes a task, paired with an input that...
Thank you for your quick response @danielhanchen. I pulled the latest version and followed the installation steps on my T4 device:
- Create a conda environment
- `pip install "unsloth[colab] @ git+https://github.com/unslothai/unsloth.git"`
Run...
The output improved after I changed the prompt to the chat template. Using the default one you provided above: ` You are a friendly chatbot who always responds in the style...
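For readers unfamiliar with chat templates: a template just serializes the role/content messages into the exact prompt string the model was trained on, which is why switching to it fixed the output. Below is a minimal sketch assuming a hypothetical Zephyr-style `<|role|>` layout; the real template comes from `tokenizer.apply_chat_template` and varies per model:

```python
def format_chat(messages):
    """Serialize a list of {role, content} dicts into one prompt string.

    Illustrative only: this mimics a Zephyr-style layout with
    <|system|>/<|user|>/<|assistant|> markers and </s> separators.
    """
    prompt = ""
    for msg in messages:
        prompt += f"<|{msg['role']}|>\n{msg['content']}</s>\n"
    # End with the assistant marker so the model knows to answer next.
    prompt += "<|assistant|>\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "Hi there!"},
]
print(format_chat(messages))
```

Prompting the base model with free-form text instead of this exact layout is what typically produces the degraded output seen before the fix.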
Yes, thank you @danielhanchen. I added `do_sample=True, temperature=0.7, top_k=50, top_p=0.95` to the test script before finetuning and it now outputs correctly. I will be updating my model based on the chat template....
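For context on what those sampling flags control, here is a minimal pure-Python sketch of top-k plus top-p (nucleus) filtering. This is illustrative only, not the transformers implementation, and `top_k_top_p_filter` is a hypothetical helper name:

```python
import math

def top_k_top_p_filter(logits, top_k=50, top_p=0.95):
    """Return the token indices that survive top-k then top-p filtering.

    top-k keeps only the k highest-scoring tokens; top-p then keeps the
    smallest prefix of those whose cumulative probability reaches top_p.
    Sampling is restricted to the returned set.
    """
    # Sort token indices by logit, highest first, and apply the top-k cut.
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    order = order[:top_k]

    # Softmax over the surviving logits (shifted by the max for stability).
    m = max(logits[i] for i in order)
    exps = [math.exp(logits[i] - m) for i in order]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Nucleus cut: keep tokens until cumulative probability reaches top_p.
    kept, cum = [], 0.0
    for idx, p in zip(order, probs):
        kept.append(idx)
        cum += p
        if cum >= top_p:
            break
    return kept

# Example: a tiny 6-token vocabulary.
logits = [2.0, 1.0, 0.5, 0.1, -1.0, -2.0]
print(top_k_top_p_filter(logits, top_k=4, top_p=0.9))  # → [0, 1, 2]
```

With `do_sample=True`, `generate` samples from this filtered set (after temperature scaling) instead of greedily taking the argmax, which is why the flags change output quality so noticeably.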
It's ok man, your speed is still god 💯
Can you share a bit about how you sped it up (if it is your secret sauce then no need)? I checked and it is faster; you only need to apply `FastLanguageModel.for_inference(model)`...
They support a lot of features. Here is the list we can use to improve our temi:
- temi can launch apps and open websites
- temi supports function calling...