[Bug] The slave's accessMessageInMemoryMaxRatio should be calculated dynamically
Before Creating the Bug Report
- [x] I found a bug, not just asking a question, which should be created in GitHub Discussions.
- [x] I have searched the GitHub Issues and GitHub Discussions of this repository and believe that this is not a duplicate.
- [x] I have confirmed that this bug belongs to the current repository, not other repositories of RocketMQ.
Runtime platform environment
OS: CentOS 6.9
RocketMQ version
tag: 5.3.1, version: 5.3.1
JDK Version
JDK: 1.8.0_202
Describe the Bug
The slave's accessMessageInMemoryMaxRatio may become negative in some cases. A negative ratio makes the broker's in-memory size threshold negative, so it always suggests pulling from the slave, and consumers end up consuming from the slave all the time.
Steps to Reproduce
- Deploy one master and one slave.
- Update the slave's config (any key except accessMessageInMemoryMaxRatio).
- Restart the slave.
If you repeat steps 2 and 3 in a loop, accessMessageInMemoryMaxRatio will eventually become negative.
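The decay pattern above is consistent with the slave deriving its effective ratio by subtracting 10 from the configured value at startup, and a subsequent config update persisting that derived value back to disk. A minimal sketch of this suspected interaction (the field and method names below are illustrative, not RocketMQ's actual code):

```java
// Sketch of the suspected interaction (illustrative names, not RocketMQ code):
// the slave derives its effective ratio as configured - 10 at startup, and a
// config update persists the derived value, so each cycle loses another 10.
public class SlaveRatioDecay {
    static int persistedRatio = 40; // assumed default accessMessageInMemoryMaxRatio

    // models the startup path that lowers the ratio for a SLAVE broker
    static int startSlave() {
        return persistedRatio - 10;
    }

    // models a config update writing the current in-memory value back to disk
    static void persistCurrentConfig(int effectiveRatio) {
        persistedRatio = effectiveRatio;
    }

    public static void main(String[] args) {
        int effective = startSlave(); // first start: 40 - 10 = 30, as expected
        for (int cycle = 1; cycle <= 4; cycle++) {
            persistCurrentConfig(effective); // step 2: update some other config key
            effective = startSlave();        // step 3: restart the slave
            System.out.println("after cycle " + cycle + ": " + effective);
        }
        // the ratio walks 20, 10, 0, -10 instead of staying at 30
    }
}
```

Under this model, every update/restart cycle subtracts another 10, which matches the 30 → 20 → negative progression described above.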
What Did You Expect to See?
accessMessageInMemoryMaxRatio should remain at 30.
What Did You See Instead?
accessMessageInMemoryMaxRatio had dropped to 20, and it decreases further with every additional update/restart cycle until it becomes negative.
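As the issue title suggests, one way to avoid the compounding subtraction is to calculate the slave's ratio dynamically on read instead of overwriting the stored value. A hedged sketch (the class and enum below are simplified stand-ins, not the project's actual MessageStoreConfig):

```java
// Sketch of a dynamic calculation (simplified stand-in for MessageStoreConfig):
// the configured ratio is never mutated; the slave discount is applied only
// when the value is read, so restarts and config persistence cannot compound it.
public class DynamicRatioConfig {
    enum BrokerRole { ASYNC_MASTER, SYNC_MASTER, SLAVE }

    private final int accessMessageInMemoryMaxRatio;
    private final BrokerRole brokerRole;

    DynamicRatioConfig(int configuredRatio, BrokerRole role) {
        this.accessMessageInMemoryMaxRatio = configuredRatio;
        this.brokerRole = role;
    }

    // derived on every read; repeated reads or restarts always yield the same value
    int effectiveAccessMessageInMemoryMaxRatio() {
        return brokerRole == BrokerRole.SLAVE
                ? accessMessageInMemoryMaxRatio - 10
                : accessMessageInMemoryMaxRatio;
    }
}
```

With this shape, a config update persists the original configured value, and every restart of the slave still computes the same effective ratio.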
Additional Context
No response