Shengyu Liu
Results: 13 comments
Could you try printing `rope_scaling_factor`? It should be an `int`, but from the error message it seems to be a `dict`.
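A minimal sketch of the suggested check; the dict shape shown (`{"type": ..., "factor": ...}`) is an assumption based on the common Hugging Face `rope_scaling` config format, not something confirmed in the thread:

```python
# Hypothetical debugging snippet: `rope_scaling_factor` is the variable
# from the thread; how it was loaded is not shown, so this only checks it.
print(type(rope_scaling_factor), rope_scaling_factor)

# If it turns out to be a dict such as {"type": "linear", "factor": 2.0}
# (a common Hugging Face `rope_scaling` shape -- an assumption here),
# extract the numeric factor before passing it on:
if isinstance(rope_scaling_factor, dict):
    rope_scaling_factor = float(rope_scaling_factor["factor"])
```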
I don't think so. To answer your question, we first need to understand how PyTorch manages GPU memory. As far as I know, PyTorch has two allocation modes: either PyTorch allocates...
An example:
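A minimal sketch of how PyTorch's caching allocator behaves, assuming a CUDA device is available; the tensor size is illustrative. `memory_allocated()` tracks bytes actively held by live tensors, while `memory_reserved()` tracks bytes PyTorch has cached from the CUDA driver, so the two can diverge after a tensor is freed:

```python
import torch

assert torch.cuda.is_available()

# Allocate a ~1 GiB float32 tensor (1024 * 1024 * 256 * 4 bytes).
x = torch.empty(1024, 1024, 256, device="cuda")
print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.1f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.1f} MiB")

# Free the tensor: `allocated` drops, but the block stays in the cache,
# so `reserved` does not shrink.
del x
print(f"allocated after del: {torch.cuda.memory_allocated() / 2**20:.1f} MiB")
print(f"reserved after del:  {torch.cuda.memory_reserved() / 2**20:.1f} MiB")

# Explicitly return cached blocks to the driver.
torch.cuda.empty_cache()
print(f"reserved after empty_cache: {torch.cuda.memory_reserved() / 2**20:.1f} MiB")
```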