Francesco Ferrari
@klshuster Just following up on this to see if it might be an issue from my side or a bug. I noticed it happens when calling this `opt`: https://github.com/facebookresearch/ParlAI/blob/b1acb681207559da56a787ba96e16f0e23697d92/projects/bb3/agents/opt_bb3_agent.py#L216
Hi, I am using `clone`. I am currently able to run a few hundred agents; however, I would still like to be able to free up the GPU memory once...
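For context, this is the pattern I am attempting, as a minimal framework-free sketch: `FakeAgent` and its buffer are stand-ins for a cloned agent holding per-clone state (they are not ParlAI APIs). Dropping every reference to a clone and forcing a GC pass is what should let the memory be reclaimed; with real GPU tensors one would additionally call `torch.cuda.empty_cache()`:

```python
import gc
import tracemalloc

class FakeAgent:
    """Stand-in for a cloned agent; the buffer mimics per-clone state."""
    def __init__(self):
        self.buffer = bytearray(4 * 1024 * 1024)  # ~4 MB, like the per-clone overhead

tracemalloc.start()
agents = [FakeAgent() for _ in range(10)]
used_with_agents, _ = tracemalloc.get_traced_memory()

# Drop every reference, then force a collection so the memory can be reclaimed.
# With real GPU tensors, torch.cuda.empty_cache() would go here as well.
agents.clear()
gc.collect()
used_after_free, _ = tracemalloc.get_traced_memory()

print(used_with_agents > 40 * 1024 * 1024)  # buffers were allocated
print(used_after_free < 1024 * 1024)        # and released after clearing
```

This frees the host-side references fine; the question is whether the cloned agents hold on to GPU-side state that this pattern cannot release.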
Yes, it OOMs at around 500 agents (the model itself takes about 9GiB, running on a T4 with approximately 16GiB). And yes, the 4-7MB does not get cleared out. Below...
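A back-of-the-envelope check with the numbers above (the per-agent figure is derived, not measured, and assumes the headroom is consumed evenly by the clones):

```python
# Rough per-agent memory estimate from the observed OOM point.
total_gib = 16        # T4 memory
model_gib = 9         # resident model
agents_at_oom = 500   # roughly where it OOMs

free_mib = (total_gib - model_gib) * 1024
per_agent_mib = free_mib / agents_at_oom
print(round(per_agent_mib, 1))  # ≈ 14.3 MiB per agent
```

That is in the same ballpark as the 4-7MB that never gets cleared, plus whatever activations and allocator fragmentation add on top.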
Thank you for the reply! Will it also need this much memory for inference, or is the 64GB memory requirement only needed for training?
I have spun up a new instance with 2 A100s for a total of 80GB, but the problem still persists. Strangely, it doesn't seem to be taking all the...
Yep, same exact command as above with the same params' values.
@jxmsML sorry for bringing this up again, but I was wondering whether the behavior explained above is anomalous or I am simply misunderstanding how the search and memory...
I am using a custom script that heavily borrowed from `scripts/interactive_web.py` where I am passing personas as well. This is the command at inference: `python3.8 ParlAI/parlai/scripts/chat_model.py --model-file ParlAI/data/models/chatbot_model/model --init-opt gen/r2c2_bb3`...
But even using a simpler script with no personas, such as `interactive.py`, produces odd results (where the bot replies to itself and then returns a response that would make sense...
@samuelazran of course! This is the existing script (omitting some parts that are not relevant to the generation itself): ``` from parlai.scripts.interactive import setup_args from parlai.core.agents import create_agent...