jyC23333
@20191864218 Hi, yes, I've already adapted the Qwen model to LLaVA. There are many details to watch out for. I suggest you follow this repo when adapting Qwen to LLaVA: https://github.com/Ucas-HaoranWei/Vary
You can check whether the pad_token is undefined in your tokenizer. When I adapted Qwen to LLaVA, the pad_token was None in QwenTokenizer, which caused the same error.
The pad_token is used to build the attention mask; just setting the pad_token_id to -100 directly in the training code will solve it.
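For reference, here is a minimal sketch of that workaround in a LLaVA-style data collator. The pad_sequence / attention-mask pattern mirrors LLaVA's supervised data collator, but the function and variable names below are my own placeholders, not the actual training code:

```python
import torch

PAD_ID = -100  # stand-in pad id, since QwenTokenizer has no pad_token / pad_token_id

def collate_batch(input_ids_list):
    # Pad the batch with the stand-in id instead of tokenizer.pad_token_id (which is None).
    input_ids = torch.nn.utils.rnn.pad_sequence(
        input_ids_list, batch_first=True, padding_value=PAD_ID
    )
    # Build the attention mask from the same id so padded positions are masked out.
    attention_mask = input_ids.ne(PAD_ID)
    return dict(input_ids=input_ids, attention_mask=attention_mask)
```

Note this only works because LLaVA drops the padded positions (via the attention mask) before the embedding lookup; if your model embeds the raw input_ids directly, use a real token id (e.g. the end-of-document token's id) as the pad value instead.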
> at the end of the setup.py file, find:
>
> ```python
> setup(
>     ext_modules=cythonize(ext_modules),
>     include_dirs=[numpy_include_dir],
>     cmdclass={'build_ext': BuildExtension}
> )
> ```
>
> add "include_dirs=[numpy_include_dir]," as above.
>
> That's how I...
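For context, here is a minimal, self-contained sketch of what the end of such a setup.py can look like. The extension name and source file are hypothetical, and I'm assuming numpy_include_dir comes from numpy.get_include() earlier in the file:

```python
import numpy
from setuptools import setup, Extension
from Cython.Build import cythonize
from torch.utils.cpp_extension import BuildExtension

# Location of the numpy headers; this is what numpy_include_dir usually refers to.
numpy_include_dir = numpy.get_include()

# Hypothetical Cython extension; the real list is defined earlier in setup.py.
ext_modules = [
    Extension('example_ext', sources=['example_ext.pyx']),
]

setup(
    ext_modules=cythonize(ext_modules),
    include_dirs=[numpy_include_dir],  # the added line that fixes the missing numpy headers
    cmdclass={'build_ext': BuildExtension},
)
```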
Hi @SkalskiP, my code is shown below:

```python
import cv2
import maestro

image = cv2.imread('./鲫鱼.png')
generator = maestro.SegmentAnythingMarkGenerator(device='cuda')
marks = generator.generate(image=image)
marks = maestro.refine_marks(marks=marks)
mark_visualizer = maestro.MarkVisualizer()
marked_image = mark_visualizer.visualize(image=image, marks=marks)
```
@SkalskiP Hi, the bug still exists with the latest version. This is my CUDA info, and I'm using torch 2.1.0:
@SkalskiP Hi, the dependency info is:
Thanks so much.