TigerAI@Taiwan
**Running the model with an NVIDIA card (N-card) was perfect, but something is wrong with an AMD card (A-card):**

```
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: yes
ggml_init_cublas: CUDA_USE_TENSOR_CORES: no
ggml_init_cublas:...
```
Thanks for the information, I will try it and give feedback soon.
Hi dhiltgen, thanks for your help. After `export HSA_OVERRIDE_GFX_VERSION=11.0.0` and `export HIP_VISIBLE_DEVICES=x`, it's still not working for me. Do you mean the issue is the dual CPU? rocm docker will work...
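For reference, the suggested workaround looks like this as a shell snippet. This is only a sketch: `11.0.0` spoofs the gfx1100 (RDNA3) target that RX 7900-series cards report, and the device index `0` below is just an example stand-in for the `x` placeholder above.

```shell
# Spoof the GPU target so ROCm treats the card as gfx1100 (RDNA3);
# RX 7900-series GPUs report gfx1100, hence 11.0.0.
export HSA_OVERRIDE_GFX_VERSION=11.0.0

# Restrict ROCm to one GPU by index. 0 is just an example;
# check rocminfo / rocm-smi output for the right index.
export HIP_VISIBLE_DEVICES=0
```

The exports only affect the current shell, so the ollama server has to be restarted from that same shell (or the variables set in its service environment) to pick them up.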
Bug found: the ROCm Docker image v0.1.29 doesn't support a dual-CPU machine with RX 7900 x4. BUT, after rolling back to 0.1.28, this is supported and runs pretty well with dual CPUs.
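A sketch of the rollback, assuming ollama's usual Docker Hub tag scheme (`<version>-rocm`) and its documented ROCm device passthrough flags:

```shell
# Pin the known-good ROCm image instead of :rocm / latest
# (tag name assumed from ollama's versioned Docker Hub tags).
docker pull ollama/ollama:0.1.28-rocm

# Run with the ROCm devices passed through to the container.
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:0.1.28-rocm
```

Pinning the tag keeps a later `docker pull` from silently moving back to the broken 0.1.29 image.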
My friend said this feature became ready last week: https://github.com/liusida/ComfyUI-Login Let's try it.
> hello all,
>
> I found the solution: please downgrade TensorFlow with `pip install tensorflow==2.14`
>
> and edit the file requirements_linux.txt to `tensorboard==2.14.0` `tensorflow==2.14.0`
>
> because...
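For reference, the pinned lines in `requirements_linux.txt` would look like this (a sketch of the edit the quoted comment describes, so future setup runs don't pull a newer TensorFlow back in):

```
tensorboard==2.14.0
tensorflow==2.14.0
```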
Thanks, you are definitely right: llama3 is working fine, but other models, such as models fine-tuned on Chinese, are not working properly. For example, in our testing: https://ollama.com/ycchen/breeze-7b-instruct-v1_0
Thanks to you all, it finally works now. An A770 16G in WSL2 with Ubuntu 22.04 works fine BUT is pretty slow: 512*512 at 5 it/s takes 1 min 48 sec. What I installed: sudo...
> Those warnings can be ignored... they're coming from the tensorflow package which is only there as a dependency for tensorboard. > > kohya uses PyTorch to train models. TensorRT...
No surprise; llava has always been poor at image recognition. It's not an ollama issue; you may have to ask the llava team.