zmwv823

Results 16 comments of zmwv823

Install the custom node "ComfyUI_IPAdapter_plus" or "ComfyUI-InstantID" (requires 8 GB+ VRAM). ![image](https://github.com/comfyanonymous/ComfyUI/assets/13308350/2fc60357-edaf-4541-bc9d-cd1e7f33023a)
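Custom nodes are usually installed by cloning them into ComfyUI's `custom_nodes` folder. A minimal sketch, assuming the standard portable layout (paths and the repository URL shown are examples; verify the repo you actually want):

```shell
# Assumed layout: ComfyUI/custom_nodes/ holds one folder per custom node.
cd ComfyUI/custom_nodes

# Example: clone the IPAdapter plus node, then restart ComfyUI so it loads.
git clone https://github.com/cubiq/ComfyUI_IPAdapter_plus
```

After restarting ComfyUI, the node's entries should appear in the node search menu.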

> That is the last version of Transformers that Transformers BLIP code works on, which is why it's pinned. A lot of people still use BLIP, and most can't run...

> The code I am using is not compatible. If you know how to PR an upgrade of the code to use high transformers that would be fine. But I...

> #393
>
> > Refactored BLIP!
>
> FYI the reason Art Venture's BLIP works is because it's literally the original BLIP code copy-pasted. It's not transformers-based. **Thanks a...

Indeed, it plays fine in mpv, but not in PotPlayer, even when using LAV.

> > Indeed, it plays fine in mpv, but not in PotPlayer, even when using LAV.
>
> Switching to LAV decoding should make it play properly. Try forcing LAV decoding in the filter settings.

![Screenshot 2023-09-21 090345](https://github.com/Borber/seam/assets/13308350/6991d050-a665-4cef-9148-eaafe03b1d45) ![Screenshot 2023-09-21 090359](https://github.com/Borber/seam/assets/13308350/7e1b6804-7163-4731-b01b-7a88052332bf) ![Screenshot 2023-09-21 090444](https://github.com/Borber/seam/assets/13308350/5139f3ae-c9d9-4d80-9dea-43034760e2b0) I tried my best, but it still doesn't work.

Install **llama_cpp_python** using a whl file from **https://github.com/abetlen/llama-cpp-python/releases**. Choose the prebuilt whl that matches the CUDA version of your ComfyUI environment.
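The whl also has to match the Python version of ComfyUI's embedded interpreter (the `cpXX` tag in the filename). A minimal sketch to check which tag to look for; run it with ComfyUI's own `python.exe`, not your system Python:

```python
import sys

# Build the CPython tag (e.g. "cp311") that must appear in the wheel
# filename, e.g. llama_cpp_python-...-cp311-cp311-win_amd64.whl
tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(tag)
```

If the printed tag is `cp311`, pick a wheel whose name contains `cp311-cp311`.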

Check the console log when you start ComfyUI. ![image](https://github.com/gokayfem/ComfyUI_VLM_nodes/assets/13308350/ce5b0480-65f9-4382-a670-096307a909c1)

**Check this folder.** ![image](https://github.com/gokayfem/ComfyUI_VLM_nodes/assets/13308350/bf30e9d5-69d3-44b3-8b0f-8314aceb1ea5) My ComfyUI env: - torch 2.2.2 + CUDA 12.1.

> That's the error I keep getting ![image](https://private-user-images.githubusercontent.com/1458624/323068936-2f8a45eb-caab-4596-9f5b-374fe8c1c611.png)

## Or try installing with this whl file (without CUDA):

- https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.61/llama_cpp_python-0.2.61-cp311-cp311-win_amd64.whl
- Command (on my laptop; you need to modify the path): D:\AI\ComfyUI_windows_portable\python_embeded\a.exe -m pip...
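For reference, a typical install sequence with the ComfyUI portable build looks like the sketch below. The install path is an example from my machine; substitute your own, and note that the portable build's embedded interpreter normally ships as `python.exe`:

```shell
# Download the CPU-only wheel (no CUDA) from the releases page.
curl -LO https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.61/llama_cpp_python-0.2.61-cp311-cp311-win_amd64.whl

# Install it with ComfyUI's embedded Python, not the system one.
# (Example path - adjust to wherever your portable ComfyUI lives.)
D:\AI\ComfyUI_windows_portable\python_embeded\python.exe -m pip install llama_cpp_python-0.2.61-cp311-cp311-win_amd64.whl
```

Using the embedded interpreter matters: installing into the system Python leaves ComfyUI's own environment unchanged.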