Atif Ghogha


```python
try:
    from torch.nn.attention.flex_attention import (
        flex_attention as _flex_attention,
        create_block_mask as _create_block_mask,
    )
    _flex_attention = torch.compile(_flex_attention, dynamic = True, options = torch_compile_options)
    HAS_FLEX_ATTENTION = False
except:
    HAS_FLEX_ATTENTION = False
    pass
```

I...