[DRAFT] Add LITA inference
What does this PR do ?
Note that this is only a draft.
- Convert LITA checkpoints to NeMo models
- Add LITA FAST & SLOW tokens (a rough sketch of the idea follows the changelog below)
- Add LITA 1.5 img_vid_start_end mode
Collection: [multimodal]
Changelog
- Added LITA config file
- Added LitaWordEmbeddingMixin class
- Added LITA arguments for inference pipeline
- Verified inference result
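As background for the FAST & SLOW tokens mentioned above, here is a minimal sketch of the idea: SLOW tokens keep full spatial detail for a few sampled frames, while FAST tokens cover every frame but are pooled down to one token each. The shapes, pooling choice, and function name below are illustrative assumptions, not the code added in this PR:

```python
import torch

def build_fast_slow_tokens(frame_features: torch.Tensor, num_slow_frames: int = 4):
    """Illustrative only: frame_features is [T, S, H] = frames x spatial tokens x hidden."""
    t, s, h = frame_features.shape

    # FAST: one token per frame, temporally dense but spatially pooled.
    fast_tokens = frame_features.mean(dim=1)                      # [T, H]

    # SLOW: full spatial tokens from a small, evenly spaced subset of frames.
    idx = torch.linspace(0, t - 1, num_slow_frames).long()
    slow_tokens = frame_features[idx].reshape(-1, h)              # [num_slow_frames * S, H]

    # The multimodal sequence then concatenates both groups before they are
    # spliced into the text embeddings.
    return torch.cat([fast_tokens, slow_tokens], dim=0)

# Example: 64 frames, 256 spatial tokens per frame, hidden size 1024.
media = torch.randn(64, 256, 1024)
print(build_fast_slow_tokens(media).shape)  # torch.Size([1088, 1024])
```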
Usage
Please refer to LITA.md for usage details.
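The inference command in the discussion below reads prompts from a JSON file. As a purely hypothetical illustration of that workflow (the actual schema and key names are documented in LITA.md; "video" and "prompt" here are placeholders, not guaranteed keys):

```python
import json

# Hypothetical prompt-file entry; check LITA.md for the real field names.
entries = [
    {
        "video": "example_clip.mp4",  # resolved relative to inference.media_base_path
        "prompt": "When does the dog jump into the pool? Answer with timestamps.",
    }
]

with open("/ws/test/prompt_file.json", "w") as f:
    json.dump(entries, f, indent=2)
```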
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR. To re-run CI remove and add the label again. To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
- [ ] Make sure you read and followed Contributor guidelines
- [ ] Did you write any new necessary tests?
- [ ] Did you add or update any necessary documentation?
- [ ] Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
- [ ] Reviewer: Does the PR have correct import guards for all optional libraries?
PR Type:
- [ ] New Feature
- [ ] Bugfix
- [ ] Documentation
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed. The Contributor guidelines list specific people who can review PRs in various areas.
Additional Information
- Related to # (issue)
@yaoyu-33
I have a question: is it normal that NeMo/examples/multimodal/multimodal_llm/neva/conf/neva_inference.yaml appears to do streaming inference? I see continuous/streaming calls to replace_media_embeddings when running the command below:
```bash
neva_model_file=/ws/converted_nemo_model/lita-vicuna-v1-3-13b-finetune.nemo
prompt_file=/ws/test/prompt_file.json
output_file=/ws/test/output.json
video_base_path=/ws/test
torchrun --nproc_per_node=1 /ws/NeMo/examples/multimodal/multimodal_llm/neva/neva_evaluation.py \
  --config-path=/opt/NeMo/examples/multimodal/multimodal_llm/neva/conf/ \
  --config-name=neva_inference.yaml \
  tensor_model_parallel_size=1 \
  pipeline_model_parallel_size=1 \
  neva_model_file=$neva_model_file \
  trainer.devices=1 \
  trainer.precision=16 \
  prompt_file=$prompt_file \
  inference.media_base_path=$video_base_path \
  inference.media_type=video \
  output_file=$output_file \
  inference.temperature=0.2 \
  inference.top_k=0 \
  inference.top_p=0.9 \
  inference.greedy=False \
  inference.add_BOS=False \
  inference.all_probs=False \
  inference.repetition_penalty=1.2 \
  inference.insert_media_token=right \
  inference.tokens_to_generate=256 \
  quantization.algorithm=awq \
  quantization.enable=False
```
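For context, my current mental model is the generic token-by-token decode loop sketched below (illustrative only, not NeMo's generation code; the class and method names are placeholders). If the media-embedding replacement sits on the per-step forward path, it is re-executed once for every generated token, which would look like continuous/streaming calls to replace_media_embeddings:

```python
import torch

class DummyMultimodalLM(torch.nn.Module):
    """Stand-in model: embedding prep plus an LM head, no real video understanding."""

    def __init__(self, vocab=32000, hidden=64):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab, hidden)
        self.head = torch.nn.Linear(hidden, vocab)
        self.eos_token_id = 2

    def embed_and_replace_media(self, tokens, media):
        # Stands in for the step where media embeddings are spliced into the
        # text embeddings (the replace_media_embeddings call asked about above).
        return self.embed(tokens)

    def forward(self, inputs_embeds):
        return self.head(inputs_embeds)

def greedy_generate(model, input_ids, media, max_new_tokens=8):
    tokens = input_ids
    for _ in range(max_new_tokens):
        # Embedding preparation runs inside the per-step loop, so it is called
        # once per generated token during inference.
        embeddings = model.embed_and_replace_media(tokens, media)
        logits = model(embeddings)
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_token], dim=-1)
        if next_token.item() == model.eos_token_id:
            break
    return tokens

out = greedy_generate(DummyMultimodalLM(), torch.tensor([[1, 5, 7]]), media=None)
print(out.shape)
```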