Shaswati Saha

4 issues opened by Shaswati Saha

I am trying to reproduce the results in Figure 8 of the paper and am getting a truth ratio of 1.0 on the forget set (5%) for all unlearning steps until...
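For context, the truth ratio compares the model's length-normalized likelihood of perturbed (false) answers against that of a paraphrased correct answer, so a value pinned at 1.0 at every step suggests the two quantities being compared are degenerate. A minimal sketch of the ratio as I understand it from the paper (the exact normalization in the repo's eval script may differ):

```python
def truth_ratio(paraphrased_prob: float, perturbed_probs: list[float]) -> float:
    """Mean length-normalized probability of the perturbed (false) answers
    divided by that of the paraphrased (true) answer."""
    return sum(perturbed_probs) / len(perturbed_probs) / paraphrased_prob

# Illustrative numbers only: a well-finetuned model should score well below 1.
print(truth_ratio(0.4, [0.10, 0.12, 0.08]))  # 0.25
```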

I finetuned llama2 on the full dataset, ran gradient ascent on forget05, and then evaluated the unlearned model on forget05. Surprisingly, when I looked at the eval_log_forget.json file, all I...
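While the full issue text is truncated here, a quick way to see what the evaluation actually wrote is to inspect the log's top-level structure. A minimal sketch, assuming eval_log_forget.json is plain JSON (the key names depend on the repo's evaluation script):

```python
import json

# Path taken from the issue text; adjust to wherever the eval script writes its output.
with open("eval_log_forget.json") as f:
    log = json.load(f)

# Dump the top-level keys so it is clear which metrics were actually recorded.
for key, value in log.items():
    size = len(value) if hasattr(value, "__len__") else value
    print(key, type(value).__name__, size)
```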

Getting the below error while trying to train the finetuned model (LoRA llama2) on the forget set: `ValueError: Target module Dropout(p=0.05, inplace=False) is not supported. Currently, only the following modules are supported: torch.nn.Linear,` ...
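This PEFT error typically means the LoRA `target_modules` setting matched a non-Linear submodule (here a Dropout layer), e.g. via an overly broad name or regex. A minimal sketch of restricting the config to Linear projections, assuming Llama-style module names such as `q_proj` and `v_proj` (the actual names in the repo's config may differ):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Name only nn.Linear submodules; a broad pattern can accidentally match
# Dropout layers and trigger the "Target module ... is not supported" error.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # Linear projections only
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)
```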

Hi, I know you've provided instructions on setting up the env, but I'm having issues preparing the data using transformers. Could you please share the exact configuration (e.g., the transformers and diffusers versions)...
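In the meantime, it may help to post the versions from the failing environment so they can be compared against a working setup. A small sketch for dumping them (the package list is only a guess at what the repo depends on):

```python
from importlib.metadata import version, PackageNotFoundError

# Print installed versions of the packages most likely to matter for data prep/training.
for pkg in ["transformers", "datasets", "peft", "torch", "accelerate"]:
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```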