Zhanhui Zhou
I am willing to help. I can run the test and do some preliminary experiments. I can also contribute to the implementation (let me know if needed), although I can...
Sure. I will have a try by early next week.
> Thanks a lot! Could you first confirm that your machine can run the XL model size by running the following line. Please feel free to ask me if you...
@whpy Hi there, we’ve built [dllm](https://github.com/ZHZisZZ/dllm), a lightweight finetuning framework for diffusion language models on top of the [🤗 Transformers](https://github.com/huggingface/transformers) `Trainer`. Give it a try if you’d like to finetune...
@yangdongchao Hi there, we’ve built [dllm](https://github.com/ZHZisZZ/dllm), a lightweight finetuning framework for diffusion language models on top of the [🤗 Transformers](https://github.com/huggingface/transformers) `Trainer`. Give it a try if you’d like to finetune...
@devops724 @prp-e Hi there, we’ve built [dllm](https://github.com/ZHZisZZ/dllm), a lightweight finetuning framework for diffusion language models on top of the [🤗 Transformers](https://github.com/huggingface/transformers) `Trainer`. Give it a try if you’d like to...
@user50lab Hi there, we’ve built [dllm](https://github.com/ZHZisZZ/dllm), a lightweight finetuning framework for diffusion language models on top of the [🤗 Transformers](https://github.com/huggingface/transformers) `Trainer`. Give it a try if you’d like to finetune...
@duterscmy Hi, [dllm](https://github.com/ZHZisZZ/dllm) provides finetuning and batch-inference scripts for LLaDA; feel free to take a look!
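For context, the comments above point to dllm as a finetuning framework built on top of the 🤗 Transformers `Trainer`. The sketch below only illustrates that underlying `Trainer` finetuning loop; it is not dllm's actual API, the model ID, dataset, and hyperparameters are placeholders, and the diffusion-specific masking and loss that dllm adds for models like LLaDA are not reproduced here (see the dllm README for the real scripts).

```python
# Minimal sketch of a 🤗 Transformers Trainer finetuning loop (the API dllm builds on).
# All names below are placeholders; this is NOT dllm's interface.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "gpt2"  # placeholder checkpoint, not a diffusion language model
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder corpus, tokenized into fixed-length examples.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sft-sketch",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    # Standard causal-LM collator; a diffusion LM framework would replace this
    # with its own masking/noising collator and loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```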