[NeMo-UX] Add PEFT
What does this PR do?
Initial PR for PEFT in NeMo 2.0.
Collection: [Note which collection this PR will affect]
Changelog
- Add specific line by line info of high level changes in this PR.
Usage
- You can potentially add a usage example below
# Add a code snippet demonstrating how to use this
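A hypothetical sketch of what PEFT usage could look like in NeMo 2.0; the class name `llm.peft.LoRA`, the model/data classes, and the `peft` argument to `llm.finetune` are illustrative assumptions and may not match the API this PR actually introduces:

```python
# Illustrative sketch only -- names and signatures are assumptions, not the final API.
import nemo.lightning as nl
from nemo.collections import llm

# A LoRA adapter is a typical PEFT method: the base model's weights stay frozen
# and only small low-rank matrices attached to selected linear layers are trained.
lora = llm.peft.LoRA(target_modules=["linear_qkv", "linear_proj"], dim=16)

model = llm.LlamaModel(llm.Llama2Config7B())
data = llm.SquadDataModule(seq_length=2048, micro_batch_size=1)
trainer = nl.Trainer(
    devices=1,
    max_steps=100,
    accelerator="gpu",
    strategy=nl.MegatronStrategy(),
)

# Pass the PEFT transform to fine-tuning so the adapters are injected into the
# model before training starts and only the adapter parameters receive gradients.
llm.finetune(model=model, data=data, trainer=trainer, peft=lora)
```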
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR. To re-run CI, remove and re-add the label. To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
- [ ] Make sure you read and followed Contributor guidelines
- [ ] Did you write any new necessary tests?
- [ ] Did you add or update any necessary documentation?
- [ ] Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
- [ ] Reviewer: Does the PR have correct import guards for all optional libraries?
PR Type:
- [x] New Feature
- [ ] Bugfix
- [ ] Documentation
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed. The Contributor guidelines list specific people who can review PRs to various areas.
Additional Information
- Related to # (issue)
RAPTOR uses the LLM to generate summaries, so the issue is probably caused by the serving capacity of the LLM server on your local LAN.
I am unable to locate any relevant error or warning logs through `docker logs -f ragflow-*`. I suggest enhancing the logging so that more detailed information is emitted when a task's status is abnormal.
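As an illustration of the kind of logging that would help, a minimal, hypothetical sketch (not RAGFlow's actual code; the helper name and status values are made up):

```python
import logging

logger = logging.getLogger("ragflow.task_executor")

def report_task_status(task_id: str, status: str, detail: str = "") -> None:
    # Hypothetical helper: log enough context to trace a stuck task whenever
    # its status is anything other than a normal running/finished state.
    if status not in ("running", "done"):
        logger.warning("task %s entered abnormal status %r: %s", task_id, status, detail)
```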
After extensive debugging, I finally discovered that the file-parsing task stalled before the RAPTOR step because of the expiration of `REDIS_CONN.queue_product`.
In `/ragflow/rag/settings.py`: `SVR_QUEUE_RETENTION = 60 * 60`
To address this, I recommend increasing the `SVR_QUEUE_RETENTION` value in `/ragflow/rag/settings.py` so that items in the queue do not expire prematurely.
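For example (the one-hour value is the current default shown above; 72 hours is only an illustrative choice, pick whatever covers your largest parsing batch):

```python
# /ragflow/rag/settings.py
# Current default keeps queued parsing tasks for one hour:
#   SVR_QUEUE_RETENTION = 60 * 60
# Illustrative change: retain them for 72 hours so large batches do not expire
# before the workers get to them.
SVR_QUEUE_RETENTION = 60 * 60 * 72
```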
I think this error still exists. With WS=1 parsing is expected to get stuck after 20+ files; with WS>1 it happens faster, and the worst part is that there is no error message.
Have you tried increasing the `SVR_QUEUE_RETENTION` value in `/ragflow/rag/settings.py`? It worked for me.
Thank you, that solved my bulk-parsing issue.