Ahmet Emir
@lucidrains Hey Phil, thank you very much for your help! I'll try these parameters and post the results here.
@lucidrains We have changed the parameters as:

```
efficient_transformer = Linformer(
    dim=256,
    seq_len=197,
    depth=6,
    heads=8,
    k=64
)

# Visual Transformer
model = ViT(
    dim=256,
    image_size=224,
    patch_size=16,
    num_classes=5,
    transformer=efficient_transformer,
    channels=1,
).to(device)
...
```
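As a quick sanity check on the configuration above, `seq_len=197` follows directly from the patch grid: a 224×224 image cut into 16×16 patches gives 196 patch tokens, plus one CLS token.

```
# Sanity-check seq_len for the Linformer backing the ViT above:
# (224 / 16)^2 patches, plus 1 CLS token.
image_size = 224
patch_size = 16
num_patches = (image_size // patch_size) ** 2  # 14 * 14 = 196
seq_len = num_patches + 1                      # +1 for the CLS token
print(seq_len)  # -> 197
```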
@lucidrains **Thanks again!** We will try to find a larger dataset. By the way, these are validation results, not test results. So we wondered if there could be another problem...
@SuX97 We didn’t use anything special for tuning the learning rate; however, I am not sure whether this repo comes with a default scheduler. @lucid
@umbertov Thanks for the suggestion. Do you know if it is supported in this repo? @lucidrains
@lucidrains I just want to make sure: can I first do BYOL, and then try Distillation on top of it using this repo?
I am also trying to do the same, did you find a solution? @doglab753 @lucidrains
I have the same issue. Any solutions?
I found a temporary solution to this problem. Anyone having the same issue may use it:

```
def create_conditional_style(df):
    style = []
    for col in df.columns:
        name_length = len(col)
        pixel = 50
...
```
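For anyone who wants a runnable version of the snippet above, here is a minimal sketch of the pattern it suggests: sizing each column's width from its header length. The 50px base and the 8px-per-character factor are assumptions on my part (tune them to your font and table settings), and the `{"if": {"column_id": ...}, "minWidth": ...}` shape follows the usual conditional-style dict convention.

```
import pandas as pd

def create_conditional_style(df):
    # Build one style rule per column, with a minimum width that
    # grows with the header's character count.
    # 50px base + 8px per character are assumed heuristics.
    style = []
    for col in df.columns:
        name_length = len(col)
        pixel = 50 + round(name_length * 8)
        style.append({"if": {"column_id": col}, "minWidth": f"{pixel}px"})
    return style

df = pd.DataFrame({"id": [1], "long_column_name": ["x"]})
print(create_conditional_style(df))
```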
Hi @LyzhinIvan, thanks for your answers. This is my tune space for lossguide:

```
"cat_features": ["sector", "country"],
"space": {
    'iterations': hp.quniform('iterations', 200, 1700, 100),
    'max_depth': hp.quniform('max_depth', 5, 16, 1),
...
```
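One gotcha with the space above: `hp.quniform(label, low, high, q)` samples `round(uniform(low, high) / q) * q`, which yields floats (e.g. `600.0`), while CatBoost expects integer `iterations` and `max_depth`. A common workaround, sketched below with a hypothetical `cast_catboost_params` helper, is to cast those keys inside the objective before fitting:

```
# hp.quniform returns floats; cast the integer-valued CatBoost
# parameters before passing them to the model.
def cast_catboost_params(params):
    # 'iterations' and 'max_depth' match the search-space labels above.
    out = dict(params)
    for key in ("iterations", "max_depth"):
        if key in out:
            out[key] = int(out[key])
    return out

sampled = {"iterations": 600.0, "max_depth": 7.0, "learning_rate": 0.05}
print(cast_catboost_params(sampled))
# -> {'iterations': 600, 'max_depth': 7, 'learning_rate': 0.05}
```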