Kim Hee Su

12 comments of Kim Hee Su

I took a close look at this code. One JSON file includes information for many people, and the index of a particular person is determined by the `idx` parameter in `single_pose_dict2np(person_dict, idx)`...
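
For illustration only, here is a minimal sketch of how that per-person indexing could look; the dictionary keys and value layout are my assumptions, not necessarily the repository's actual format:

```python
import numpy as np

def single_pose_dict2np(person_dict, idx):
    """Hypothetical sketch: extract one person's pose from a multi-person dict.

    Assumes person_dict maps person indices (as string keys) to a list of
    [x, y, score] keypoints; the real file layout may differ.
    """
    keypoints = person_dict[str(idx)]              # idx selects which person to read
    return np.asarray(keypoints, dtype=np.float32)

# One JSON file holds several people; idx picks one of them, e.g.:
# with open("poses.json") as f:
#     person_dict = json.load(f)
# pose_0 = single_pose_dict2np(person_dict, 0)
```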

Does `dustynv/nano_llm:r36.4.0` support NanoLLM 24.8? I ran `nano_llm.vision.video` with the `dustynv/nano_llm:r36.4.0` container, but I still see a steady increase in RAM usage. Next, I tried the same command with the...

In `nano_llm.vision.video`:

```python
...
while True:
    if last_image is None:
        continue

    chat_history.append('user', text=f'Image {num_images + 1}:')
    chat_history.append('user', image=last_image)
    last_image = None
    num_images += 1

    for prompt in prompts:
        chat_history.append('user', prompt)
...
```

I think he wants to use Qwen2.5-"VL" (a VLM), not Qwen2.5 (an LLM). Jetson AI Lab only supports the LLM versions; there is no VLM version of it there. The supported VLMs are VILA-v1.5, VILA-v1.0, LLaVA-v1.5,...
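
For reference, loading one of those supported VLMs with NanoLLM looks roughly like the sketch below; the model id and keyword arguments follow the Jetson AI Lab / NanoLLM examples from memory and may need adjusting for a given container version.

```python
# Rough sketch (mine, not from the thread) of loading a supported VLM with NanoLLM.
from nano_llm import NanoLLM, ChatHistory

model = NanoLLM.from_pretrained(
    "Efficient-Large-Model/VILA1.5-3b",   # VILA-1.5, one of the supported VLM families
    api="mlc",                            # MLC backend used in the nano_llm containers
    quantization="q4f16_ft",
)
chat_history = ChatHistory(model)
```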

@Hugh-yw Use Jetpack 6.1 instead of Jetpack 5.1.4

> > [@Hugh-yw](https://github.com/Hugh-yw) Use Jetpack 6.1 instead of Jetpack 5.1.4
>
> Is there any compatible 5.1.4?

I don't know. Your CUDA is 11.4, but the package you want to...

@JIA-HONG-CHU I use `dustynv/nano_llm:r36.4.0`. I manually installed mlc-llm==0.19.0 and tvm==0.19.0 in the nano_llm container.

@JIA-HONG-CHU `pip install -U mlc-llm tvm`. Also, I replaced the `/opt/mlc-llm` folder with the latest version, maybe with `git clone --recursive [mlc-llm repository]`; I don't remember exactly.
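
If you repeat this, here is a small sanity-check sketch of my own (not part of the thread); the distribution names follow the pip command above and may differ inside a given container:

```python
# Verify which mlc-llm / tvm versions Python actually picks up after the upgrade.
from importlib.metadata import version, PackageNotFoundError

for dist in ("mlc-llm", "tvm"):                    # names taken from the pip command above
    try:
        print(dist, version(dist))                 # expect 0.19.0 after the upgrade
    except PackageNotFoundError:
        print(dist, "not installed under this name")

import tvm
print("tvm imported from:", tvm.__file__)          # shows which install the interpreter resolved
```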

Can I use NVILA in the nano_llm container?