Handle the case for T5 tokenizers
Hi, thanks for the great work!
I was working with some evaluation models that use ProtT5 and Ankh PLMs, and these models do not have a BOS token: their tokenizers only append an EOS token (`</s>`), which the current special-token removal does not strip from the decoded sequence.
Decoded example for both ProtT5 and Ankh: `MQMLKMGLV</s>`
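For reference, a minimal sketch of the behavior with a Hugging Face T5 tokenizer. The checkpoint name and printed outputs below are assumptions for illustration, not taken from this PR:

```python
# Minimal sketch, assuming a ProtT5-style checkpoint from the Hugging Face hub
# (the checkpoint name is an assumption, not something used in this PR).
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("Rostlab/prot_t5_xl_half_uniref50-enc")

# ProtT5 expects space-separated residues.
ids = tokenizer("M Q M L K M G L V").input_ids

# T5-style tokenizers append only an EOS token (</s>) and add no BOS/CLS,
# so code that strips the first *and* last positions drops a real residue,
# while a plain decode still leaves </s> in the output.
print(tokenizer.decode(ids))                            # ...M G L V</s>
print(tokenizer.decode(ids, skip_special_tokens=True))  # ...M G L V
```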
Hi, thank you for this pull request!
Having this logic inside `_compute_gradients` might be problematic, for instance: https://github.com/NREL/EvoProtGrad/pull/16/files#r2442682916
I'm thinking that we could move this logic out of `_compute_gradients` and into the expert's code (its `__call__` function), so that each expert handles removal of its own special tokens.
I think we would also need to add custom expert code for T5/Ankh, though!
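As a rough sketch of what that could look like (this is not EvoProtGrad's actual API; the helper name, its arguments, and the tensor shapes are assumptions for illustration), each expert would strip only the special-token positions its own tokenizer adds:

```python
import torch

def strip_special_tokens(one_hots: torch.Tensor, tokenizer_style: str) -> torch.Tensor:
    """Drop special-token positions from a (batch, seq_len, vocab) one-hot tensor.

    Hypothetical helper: each expert would apply this (e.g., in __call__)
    with the style matching its own tokenizer.
    """
    if tokenizer_style == "t5":    # ProtT5 / Ankh: sequence + trailing </s> only
        return one_hots[:, :-1, :]
    if tokenizer_style == "bert":  # BERT/ESM-style: leading CLS + trailing SEP/EOS
        return one_hots[:, 1:-1, :]
    raise ValueError(f"unknown tokenizer style: {tokenizer_style}")

# Illustrative shapes only:
x = torch.zeros(2, 10, 25)
assert strip_special_tokens(x, "t5").shape == (2, 9, 25)
assert strip_special_tokens(x, "bert").shape == (2, 8, 25)
```

That way `_compute_gradients` stays tokenizer-agnostic, and each expert owns its own convention.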