Monami Banerjee

Results: 8 issues by Monami Banerjee

The default output format in CoreNLPParser is 'penn'. How can I change the output format to 'wordsAndTags' or 'typedDependencies'?
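One route, sketched below, is to bypass the NLTK wrapper and talk to the CoreNLP server endpoint directly, selecting the format via the `properties` query parameter (NLTK's `CoreNLPParser` uses the same endpoint under the hood). Note that 'wordsAndTags' and 'typedDependencies' are output formats of the standalone lexparser; on the server, choosing annotators and `outputFormat` is the usual substitute. The host and port here are assumptions (the server default).

```python
import json
import urllib.parse

def corenlp_url(output_format, host='http://localhost:9000'):
    # Build the request URL the CoreNLP server expects; POSTing the raw
    # sentence text to this URL returns the parse in the chosen format.
    # (Assumes a CoreNLP server running on the default port.)
    props = {'annotators': 'parse', 'outputFormat': output_format}
    return host + '/?properties=' + urllib.parse.quote(json.dumps(props))

# Example: corenlp_url('json') gives the endpoint for JSON parse output.
```

For dependency (typedDependencies-style) output inside NLTK itself, `nltk.parse.corenlp.CoreNLPDependencyParser` returns dependency graphs rather than constituency trees.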

Getting the error `RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling 'cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)'` while running the following training command:...
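cuBLAS execution failures are often asynchronous, so the reported call site is not necessarily the real culprit. A common first-pass triage, sketched below as general PyTorch advice (not specific to this repository's training command):

```python
import os

# 1) Force synchronous kernel launches so the traceback points at the
#    kernel that actually failed, not a later one.
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

# 2) Re-run the same step on CPU if the script supports it; shape
#    mismatches and out-of-range label indices raise readable Python
#    errors there instead of an opaque cuBLAS status.

# 3) Confirm the installed torch build matches the driver's CUDA version:
# import torch
# print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```

If the CPU run succeeds with the same data, a mismatch between the installed PyTorch CUDA build and the driver is the next thing to check.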

@YueLiao, can you please list the following versions used for this repository? - python - cuda - cython - opencv-python
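For reporting one's own environment alongside such a question, the interpreter and package versions can be collected with the standard library alone; a small sketch (CUDA version comes from `nvcc --version` or `torch.version.cuda` instead, since it is not a pip package):

```python
import platform
from importlib import metadata

def report_versions(pkgs=('Cython', 'opencv-python')):
    # Gather the version info an issue report usually needs.
    out = {'python': platform.python_version()}
    for pkg in pkgs:
        try:
            out[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            out[pkg] = 'not installed'
    return out
```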

- Where can I find the paper for VILA-1.5? - What vision encoders and LMs are used for VILA-1.5 3B, 8B, 13B, and 40B?

In `SetCriterionHOI.__init__`, how are the initial object (`self.obj_nums_init`) and verb (`self.verb_nums_init`) numbers set? The two arrays for HICO-DET and V-COCO are hard-coded [here](https://github.com/YueLiao/CDN/blob/main/models/hoi.py#L121). Are these just counts...
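If those arrays are indeed per-class frequency counts, they could in principle be recomputed from the dataset annotations rather than hard-coded. A minimal sketch of such a tally, assuming annotations arrive as (image_id, class_label) pairs (a hypothetical format; the CDN repo stores the values directly):

```python
from collections import Counter

def class_counts(annotations, num_classes):
    # Tally how many times each class label appears in the annotations,
    # returning a dense list indexed by class id, zero for unseen classes.
    counts = Counter(label for _, label in annotations)
    return [counts.get(c, 0) for c in range(num_classes)]
```

Arrays like this are sometimes used to initialize classifier logit biases so that rare classes start with lower prior probability, which would explain why they differ between HICO-DET and V-COCO.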

Where does ollama-python save the model pulled with `ollama.pull('llava')`? I tried setting the environment variable `OLLAMA_MODELS`, as with the Ollama CLI, but the path provided in `OLLAMA_MODELS` is not being used.
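One likely explanation: ollama-python is only an HTTP client to a running Ollama server, so the server process stores the models, and `OLLAMA_MODELS` must be set in the server's environment before `ollama serve` starts, not in the Python process. A sketch of that assumption (the path here is illustrative):

```python
import os
import subprocess

# Build the environment for the *server*; setting OLLAMA_MODELS in the
# Python client process has no effect, since pulls happen server-side.
env = dict(os.environ, OLLAMA_MODELS='/data/ollama-models')  # illustrative path

# Then start the server with that environment (commented out here):
# subprocess.Popen(['ollama', 'serve'], env=env)
# Afterwards, ollama.pull('llava') should land under /data/ollama-models.
```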

**Describe the bug** Getting the following error only by changing the model to `llava-onevision-qwen2-0_5b-ov` from `llava1_6-mistral-7b-instruct` in the first DPO example [here](https://github.com/modelscope/ms-swift/blob/main/docs/source_en/Multi-Modal/human-preference-alignment-training-documentation.md#dpo). **Command:**
```
CUDA_VISIBLE_DEVICES=0,1,2 \
swift rlhf \
--rlhf_type...
```

- I am trying to run inference with Cambrian-1-34B. - I have RTX 6000 GPUs with 48 GB each. - I am following [this inference script](https://github.com/cambrian-mllm/cambrian/blob/main/inference.py). Cambrian-1-34B requires multiple GPUs to run....
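A quick back-of-the-envelope check explains the multi-GPU requirement: 34B parameters in bf16 take roughly 68 GB of weights alone, which exceeds one 48 GB card. A rough sizing sketch (the overhead factor is an assumption, not a measurement):

```python
import math

def gpus_needed(params_b, bytes_per_param=2, gpu_gb=48, overhead=1.2):
    # Estimate GPU count from weight memory: params (billions) * bytes per
    # parameter, inflated by a rough factor for activations and KV cache.
    need_gb = params_b * bytes_per_param * overhead
    return math.ceil(need_gb / gpu_gb)
```

By this estimate, a 34B bf16 model needs at least two 48 GB GPUs; frameworks such as Hugging Face Accelerate's `device_map='auto'` can shard the weights across them automatically.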