x2x5

Results: 7 issues from x2x5

I was really confused about when and how to place attn blocks in other people's code and tutorials, and I found you just put it in the ResBlock, and it's much...

I don't understand why this is used.
```python
class Downsample(nn.Module):
    def __init__(self, in_channels, with_conv):
        super().__init__()
        self.with_conv = with_conv
        if self.with_conv:
            # no asymmetric padding in torch conv, must do it ourselves...
```
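A minimal pure-Python sketch (not the repository's actual code) of why the asymmetric padding matters: with a stride-2 convolution, symmetric padding `(1, 1)` and right-only padding `(0, 1)` produce the *same output length* on an even-length input, but the sampled windows are aligned differently, so the values differ. The right-only variant matches how TensorFlow's `'SAME'` padding places windows for even inputs, which is presumably why the code pads manually.

```python
def conv1d(x, w, stride=2, pad=(0, 0)):
    """Minimal 1-D valid convolution (cross-correlation) with explicit
    (left, right) zero padding."""
    x = [0.0] * pad[0] + list(x) + [0.0] * pad[1]
    k = len(w)
    return [sum(w[j] * x[i + j] for j in range(k))
            for i in range(0, len(x) - k + 1, stride)]

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
w = [1.0, 1.0, 1.0]               # kernel_size = 3, stride = 2

sym  = conv1d(x, w, pad=(1, 1))   # like torch Conv1d(..., padding=1)
asym = conv1d(x, w, pad=(0, 1))   # pad on the right only, as in the snippet

print(sym)    # windows start at the zero-padded left edge
print(asym)   # windows aligned as TF 'SAME' would place them for even inputs
```

Both results have length 4 (`n // 2` for `n = 8`), but every output value differs, so the choice of padding side changes the features, not just the shape.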

In the EEGNet model, when the convolution kernel size is even, the usual padding strategy is half the kernel size, i.e. padding = kernel_size // 2. With an even kernel, this symmetric padding increases the temporal dimension (T) of the sequence by 1 after the convolution. In TensorFlow, setting padding='SAME' automatically produces the desired padding so that the output size matches the input size; when the total padding needed is odd, TensorFlow places the extra unit of zero padding on the right. In PyTorch, to reproduce the same effect as TensorFlow, you may need to add an nn.ZeroPad2d layer before the convolution with arguments (kernel_size // 2 - 1, kernel_size // 2, 0, 0), so that zeros are padded correctly on both sides.
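The length arithmetic above can be checked with a small sketch. The kernel size below is a hypothetical example, and the padding split follows TensorFlow's `'SAME'` convention of putting the extra unit at the end (right side):

```python
def conv_out_len(n, k, stride=1, pad=(0, 0)):
    """Output length of a 1-D convolution with explicit (left, right) zero padding."""
    return (n + pad[0] + pad[1] - k) // stride + 1

T, k = 128, 64                                       # hypothetical even temporal kernel

sym  = conv_out_len(T, k, pad=(k // 2, k // 2))      # torch Conv with padding=k//2
same = conv_out_len(T, k, pad=(k // 2 - 1, k // 2))  # asymmetric ZeroPad2d-style pad

print(sym, same)   # 129 128  ->  T + 1 vs. T
```

With an even kernel, symmetric `k // 2` padding always yields `T + 1`, while shaving one unit off one side restores the `'SAME'`-style output length `T`.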

bug

The code is:
```python
if LOW_RESOURCE:
    attn = self.forward(attn, is_cross, place_in_unet)
else:
    h = attn.shape[0]
    attn[h // 2:] = self.forward(attn[h // 2:], is_cross, place_in_unet)
```
But I feel that, if we use...
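A minimal sketch of what the `else` branch does, assuming the prompt-to-prompt-style layout where classifier-free guidance stacks the unconditional batch in the first `h // 2` rows and the conditional batch in the last `h // 2` (the row labels and `edit` function below are illustrative, not the repository's code):

```python
def edit(rows):
    """Stand-in for self.forward: marks each attention row as edited."""
    return [r + "_edited" for r in rows]

# h = attn.shape[0]; under classifier-free guidance the first h // 2 rows come
# from the unconditional branch and the last h // 2 from the conditional branch.
attn = ["uncond_0", "uncond_1", "cond_0", "cond_1"]
h = len(attn)

attn[h // 2:] = edit(attn[h // 2:])   # non-LOW_RESOURCE branch: edit only the cond half
print(attn)   # ['uncond_0', 'uncond_1', 'cond_0_edited', 'cond_1_edited']
```

So in the non-`LOW_RESOURCE` branch only the conditional half is rewritten in place, while `LOW_RESOURCE` mode passes the whole (single-branch) tensor through `forward`.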

**You mentioned in Section 3:** ![image](https://github.com/user-attachments/assets/6cc3a9a8-4cc5-4285-8bf5-837e59d8166f) Could you clarify why \( e_t \) and \( e_{t-1} \) are highly correlated? Is there any evidence to support this claim? From my...

I wonder about the resource cost of this work, because if the required GPU memory and time are too large, I feel it's hard for ordinary students to follow. I'll...

Hi, thanks for your great work on video editing. Could you provide the complete yaml files and the code for computing the metrics used to compare against the baselines? Thanks a lot!