Alessandro Giagnorio
Thank you very much for the timely response. What I wish to get from the output is:
- figure out which lines have been changed
- get the position of...
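A minimal sketch of that kind of output using Python's standard `difflib` (this is my own assumption, not the tool discussed in this thread; the sample strings are made up and line positions are reported 1-based):

```python
import difflib

def changed_lines(old: str, new: str):
    """Return (change_type, old_line_range, new_line_range) tuples, 1-based."""
    old_lines, new_lines = old.splitlines(), new.splitlines()
    matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
    changes = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # 'replace', 'delete' or 'insert'
            # For inserts/deletes one of the two ranges is empty.
            changes.append((tag, (i1 + 1, i2), (j1 + 1, j2)))
    return changes

print(changed_lines("String a;\nint b;\n", "String a;\nlong b;\n"))
# [('replace', (2, 2), (2, 2))]  -> line 2 changed in both versions
```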
Thank you very much! This is exactly what I was looking for. I will try the solution you proposed :smile:
Since this functionality might be useful for others as well, I would like to suggest some changes to your code. Considering this input: ```python output = cd.difference( ''' String var...
@xidulu perfect, thanks for the answers! Then I assume that splitting a large KB unit into multiple smaller ones may not always do the trick. Also, is the size...
@okhat, sorry to bother you: can I ask (for confirmation) whether, in the provided example, the temperature set in the LLM configuration overrides the one provided in the module?
@xaviermehaut not yet. My current use case is a custom module that combines several Predict / ChainOfThought modules, all using the same LLM but with different temperatures. Imagine something like...
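For context, a minimal sketch of the kind of module meant here (assuming a recent DSPy version): the module and field names are invented, the model name is a placeholder, and it assumes the `temperature` kwarg passed to `Predict` / `ChainOfThought` is forwarded to the LM call, which is exactly the precedence question asked above.

```python
import dspy

class BrainstormThenAnswer(dspy.Module):
    """Hypothetical module: a creative step and a deterministic step on the same LM."""

    def __init__(self):
        super().__init__()
        # Higher temperature for diverse ideas (assumes this kwarg reaches the LM call).
        self.brainstorm = dspy.ChainOfThought("question -> ideas", temperature=0.9)
        # Low temperature for a stable final answer.
        self.answer = dspy.Predict("question, ideas -> answer", temperature=0.1)

    def forward(self, question):
        ideas = self.brainstorm(question=question).ideas
        return self.answer(question=question, ideas=ideas)

lm = dspy.LM("openai/gpt-4o-mini")  # placeholder model name
dspy.configure(lm=lm)
print(BrainstormThenAnswer()(question="How could we speed up our CI pipeline?").answer)
```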
@chenmoneygithub thanks!
> Is it possible that you didn't have `EOS` tokens in the fine-tuning/DPO phase? Then the model wouldn't know what token to produce after the letter and would just keep generating...
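For reference, a hedged sketch of one way to make sure training samples carry an `EOS` token, assuming a Hugging Face tokenizer and a `{"text": ...}`-style dataset; the checkpoint name is only illustrative.

```python
from transformers import AutoTokenizer

# Illustrative checkpoint; replace with the base model used for fine-tuning/DPO.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def append_eos(example):
    # Ensure every sample ends with EOS so the model learns where to stop
    # instead of running on after the expected answer.
    if not example["text"].endswith(tokenizer.eos_token):
        example["text"] += tokenizer.eos_token
    return example

# e.g. with a `datasets.Dataset` of {"text": ...} rows:
# dataset = dataset.map(append_eos)
```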
Small update: I have done more testing with a reduced learning rate, and it seems to work much better than before. In any case, there are still instances in the...
Thank you for the support! I'll try it as soon as possible and I'll keep you updated.