toomy0toons
> later, in lines 78 and 79, there is a batch size of 32 hardcoded

The batch size above is used to calculate the total training epochs, because the training epoch...
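To illustrate the point, here is a minimal sketch of why a hardcoded batch size can desync the epoch/step calculation (the function and variable names are hypothetical, not from the actual file):

```python
import math

def total_steps(num_samples, batch_size, num_epochs):
    """Steps per epoch times epochs, derived from ONE batch-size value."""
    return num_epochs * math.ceil(num_samples / batch_size)

# If the schedule is computed with batch_size=16 but the data loader
# hardcodes 32, the run covers the data in far fewer steps than planned.
planned = total_steps(num_samples=1000, batch_size=16, num_epochs=10)
actual = total_steps(num_samples=1000, batch_size=32, num_epochs=10)
```

The fix is to read the batch size from a single config value in both places.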
I had to move to noVNC for the same issue. Trying to set up a display for X11 or VNC caused so many problems, so I had to use noVNC to render...
For me it was a CMake version issue; I manually upgraded it to CMake 3.22 and it worked.
Hi, I tried Llama 3, and maybe you can use my setup. The code is a little dirty. First, add a template for Llama 3 in `prompt_template_utils.py`: `def get_prompt_template(system_prompt=system_prompt, promptTemplate_type=None, history=False): if...`
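For reference, a sketch of what a Llama 3 instruct template can look like. The special tokens below follow the published Llama 3 chat format; the function itself is a simplified assumption, not the project's exact code:

```python
def build_llama3_prompt(system_prompt, user_message):
    # Llama 3 instruct wraps each turn in header tokens and
    # terminates it with <|eot_id|>; generation continues after
    # the final assistant header.
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful assistant.", "Hello!")
```

This is also why base-model templates (plain `[INST]`-style or raw text) produce garbage with the instruct weights: the header/eot tokens are part of the instruct fine-tune.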
I installed `llama-cpp-python` following the README docs. I have a CUDA GPU, so I installed the cuBLAS version:

```
# Example: cuBLAS
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.83 --no-cache-dir
```

...
@carloposo @KerenK-EXRM My understanding is that the instruct model (8B) has an extra set of tokens or a different prompt template. Try the 7B models?
From what I understand, the "white cloud" artifacts are the NeRF network trying to learn the vast differences caused by lighting and reflective surfaces in outdoor, in-the-wild images. So you...
Hi, I also struggled to get things working in Docker, and I'd like to share my experience. I tried many methods, like getting an unofficial Qt5...
Sorry, I left out the installation part. Go to the nerfstudio folder and run `pip install -e .` for a local installation of sdfstudio. Follow the installation docs in sdfstudio: https://github.com/autonomousvision/sdfstudio#installing-sdfstudio
Hi, are you using a remote headless server? I find it's impossible to initialize a GL context on a headless server. Try using `cudarasterizer` with a max res of 2048 and see if...
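A quick way to check whether you are in that headless situation before blaming the renderer (a generic sketch, not part of the project — on Linux, GLX context creation needs a running display server):

```python
import os

def has_display():
    # A headless Linux box typically has neither DISPLAY (X11)
    # nor WAYLAND_DISPLAY set, so GL context creation will fail.
    return bool(os.environ.get("DISPLAY") or os.environ.get("WAYLAND_DISPLAY"))

print("display available:", has_display())
```

If this prints `False` over SSH, you will need a virtual display (e.g. Xvfb/noVNC) or a CUDA-based rasterizer that does not need a GL context.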