Kim Hee Su
Thank you for your work. I have a question about the JSON data in pose/training/tracked_person, for example 08_027_alphapose_tracked_person.json. The outline of this structure is as follows:
```
└── root...
```
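A minimal sketch (not from the original post) for inspecting the file's actual nesting instead of guessing it; it assumes the JSON file sits in the working directory, and the key layout is unknown:
```python
import json

# Load one tracked-person file (filename taken from the question above).
with open("08_027_alphapose_tracked_person.json") as f:
    data = json.load(f)

# Recursively print the key hierarchy; for lists, descend into the first item.
def outline(node, indent=0):
    if isinstance(node, dict):
        for key, value in node.items():
            print("  " * indent + str(key))
            outline(value, indent + 1)
    elif isinstance(node, list) and node:
        print("  " * indent + f"[list of {len(node)}]")
        outline(node[0], indent + 1)

outline(data)
```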
```
jetson-containers run $(autotag nano_llm) \
  python3 -m nano_llm.agents.video_query --api=mlc \
    --model Efficient-Large-Model/VILA1.5-13b \
    --max-context-len 256 \
    --max-new-tokens 32 \
    --video-input \
    --video-output rtsp://0.0.0.0:5020/out
```
I confirmed RTSP as --video-input....
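One way to sanity-check that the agent's RTSP output is actually being served (a sketch, not from the post; assumes OpenCV built with FFmpeg support):
```python
import cv2

# URL from the command above; 0.0.0.0 is the server's bind address, so
# substitute the Jetson's real IP when reading from a different machine.
cap = cv2.VideoCapture("rtsp://0.0.0.0:5020/out")
ok, frame = cap.read()
print("stream opened:", cap.isOpened(), "| first frame:", frame.shape if ok else None)
cap.release()
```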
This is my trial-and-error log for running NVILA.

# First, update packages

1. mlc-llm==0.19.0, tvm==0.19.0
   ```
   pip install -U mlc-llm tvm
   ```
2. Use the main branch of `/opt/mlc-llm`...
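A quick check after the upgrade (a sketch, not part of the original log; the distribution names are taken from the pip command above and may differ from what pip actually installed):
```python
# Verify the upgraded package versions without importing the libraries.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("mlc-llm", "tvm"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed")
```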
### Your current environment

```
==============================
        System Info
==============================
OS            : Ubuntu 22.04.4 LTS (x86_64)
GCC version   : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version...
```
I am trying to quantize llava-v1.6-34b:
```
python3 -m mlc_llm.build --model /data/models/mlc/dist/models/llava-v1.6-34b \
    --quantization q4f16_ft \
    --target cuda \
    --use-cuda-graph \
    --use-flash-attn-mqa \
    --sep-embed \
    --max-seq-len 256 --artifact-path /data/models/mlc/dist/llava-v1.6-34b/ctx256 \
    --use-safetensors...
```
### Search before asking

- [x] I have searched the jetson-containers [issues](https://github.com/dusty-nv/jetson-containers/issues) and found no similar feature requests.

### Question

I used [VLM of Jetson Platform Service](https://docs.nvidia.com/jetson/jps/inference-services/vlm.html). In here, I...