
Error when installing recognize-anything

Open datar001 opened this issue 1 year ago • 6 comments

Hello, I have installed most of the dependencies following your instructions, but the last step, installing recognize-anything with the command `pip install -e ./recognize-anything/`, fails with an error:

(screenshot of the error message)

Do you have any suggestions? My current environment is as follows: (screenshots of the environment)

datar001 avatar Apr 23 '24 08:04 datar001

You may check the instructions from Grounded-SAM for more details.

Shilin-LU avatar Apr 24 '24 08:04 Shilin-LU

I find that this problem does not prevent the code from running; the project can still be trained and evaluated. However, there are two additional minor errors at Line 51 of fuse_lora_close_form.py:

  1. The checkpoint saved in stages 1 & 2 (CFR and LoRA training) uses the old format, which cannot be loaded by newer versions of diffusers. Code should therefore be added to convert the old format to the new one, as follows: (screenshot of the fix)
  2. On the same line, the folder name of the saved checkpoint contains spaces, but the post-processing call `.replace(' ', '-')` is missing.
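The two fixes above can be sketched roughly as follows. This is illustrative only, not code from the MACE repository: the function names are made up, and the exact key renaming needed for the old-to-new diffusers LoRA format depends on your diffusers version (the `to_out` → `to_out.0` rename shown here is just one example of such a mapping).

```python
def sanitize_ckpt_dir(concept: str) -> str:
    """Fix 2: strip spaces from the concept string so the saved
    checkpoint folder name contains no spaces."""
    return concept.replace(' ', '-')


def convert_old_lora_keys(state_dict: dict) -> dict:
    """Fix 1 (sketch): rename legacy LoRA keys to the naming expected
    by newer diffusers releases. The real mapping depends on your
    diffusers version -- check its conversion utilities."""
    return {k.replace('to_out', 'to_out.0'): v
            for k, v in state_dict.items()}
```

Applying `sanitize_ckpt_dir` before building the checkpoint path avoids the folder-name issue; the key conversion would run once when loading the stage-1/2 checkpoint.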

datar001 avatar Apr 24 '24 08:04 datar001

Today I trained MACE following the default settings, modifying "exase_explicit_content.yaml" to set my concepts: (screenshot of the config) After obtaining the trained checkpoint, I first evaluated the impact of the unlearning strategy on "normal" prompts (i.e., those not containing the erased concepts), and found a significant negative impact on them. Some examples, all generated with the same seed: (screenshots of the generations)

Could you help me understand this phenomenon?

datar001 avatar Apr 24 '24 12:04 datar001

Hi, for your case, you could try reducing max_training_step from 120 to 50-60, and increasing both train_preserve_scale and fuse_preserve_scale to 1e-4 or 1e-5.
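Written as a config fragment, the suggestion above would look roughly like this. The key names are assumptions based on the parameter names mentioned in this thread; check the actual YAML schema used by the MACE configs:

```yaml
# Hypothetical fragment of the training config (key names assumed
# from the parameters discussed above).
max_training_step: 60          # reduced from the default 120
train_preserve_scale: 1.0e-4   # increased to better preserve unrelated concepts
fuse_preserve_scale: 1.0e-4
```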

Shilin-LU avatar Apr 24 '24 13:04 Shilin-LU

I re-trained the model with the recommended settings. Although the problem has eased somewhat, the negative impact is still significant. For example: (screenshots of the generations)

datar001 avatar Apr 24 '24 14:04 datar001

Try to further increase the train_preserve_scale and fuse_preserve_scale to 1e-3 or 1e-2. There is a tradeoff between generality and specificity.

Shilin-LU avatar Apr 24 '24 14:04 Shilin-LU