Results: 6 issues by kimm240

Hello, I’m working on a stencil-based AI Engine kernel where I want to implement a sliding window using local memory. My goal is to reuse previously loaded rows and update...
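The row-reuse idea behind that question can be sketched in plain NumPy: keep a small window of rows in "local" storage, load only one new row per step, and rotate the buffer. This is an illustration of the sliding-window scheme only, not AI Engine API code; the function name and shapes are hypothetical.

```python
import numpy as np

def stencil_rowwise(image, kernel):
    """3x3 stencil computed through a sliding window of three rows.

    Each outer iteration reuses the two previously loaded rows and
    loads exactly one new row, mimicking a local-memory row buffer.
    """
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    rows = [image[0], image[1], image[2]]      # initial window: rows 0..2
    for y in range(h - 2):
        window = np.stack(rows)                # 3 x w window currently resident
        for x in range(w - 2):
            out[y, x] = np.sum(window[:, x:x + 3] * kernel)
        if y + 3 < h:                          # slide: drop oldest, load one new row
            rows = [rows[1], rows[2], image[y + 3]]
    return out
```

On real hardware the rotation would be pointer swaps over local-memory banks rather than Python list rebinding, but the data movement pattern is the same.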

Currently, the FuseReductionEpilogue primitive only supports Bias (addition) and BiasReLU (addition + ReLU) epilogue patterns. However, clipping operations (min(max(x, lower), upper)) are commonly used in deep learning models and would...
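The clipping epilogue in question is the composition min(max(x, lower), upper) applied to a reduction result (ReLU6 is the lower=0, upper=6 special case). A minimal NumPy sketch of the semantics, not the FuseReductionEpilogue implementation:

```python
import numpy as np

def reduce_with_clip(x, lower=0.0, upper=6.0):
    """Row-sum reduction followed by a clip epilogue.

    The epilogue is min(max(acc, lower), upper), i.e. the pattern the
    issue proposes fusing alongside Bias and BiasReLU.
    """
    acc = x.sum(axis=1)                               # reduction block
    return np.minimum(np.maximum(acc, lower), upper)  # clip epilogue
```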

The FuseReductionEpilogue primitive currently supports fusing bias addition epilogues into reduction blocks. This commit extends the primitive to also support ReLU activation functions in epilogue blocks, enabling fusion of patterns...
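The fusion being described computes the bias-add and ReLU inside the reduction's own loop nest instead of as separate passes over the output. A toy sketch of those semantics (not the TVM primitive itself; names are illustrative):

```python
import numpy as np

def matmul_bias_relu_fused(a, b, bias):
    """Matmul (the reduction) with bias + ReLU applied in the epilogue,
    i.e. at the point each output element is finalized."""
    m, k = a.shape
    _, n = b.shape
    out = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for p in range(k):                       # reduction loop
                acc += a[i, p] * b[p, j]
            out[i, j] = max(acc + bias[j], 0.0)      # fused epilogue: bias + ReLU
    return out
```

The unfused form would materialize `a @ b`, then traverse it once for the add and again for the ReLU; fusion removes those extra passes.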

This commit extends the make_fused_bias_activation_pattern function to support PyTorch frontend's specific IR generation pattern for convolution operations with bias. When PyTorch models with bias=True are converted to Relax IR, the...
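The reshape the commit refers to exists because a per-channel bias of shape (C,) must be reshaped to (1, C, 1, 1) before it can broadcast over an NCHW convolution output. A NumPy illustration of that broadcast (hypothetical helper, not the actual Relax IR printout):

```python
import numpy as np

def bias_as_reshape_add(conv_out, bias):
    """Adds a (C,) bias to an (N, C, H, W) tensor via reshape + add,
    the two-op pattern a frontend emits for bias=True."""
    return conv_out + bias.reshape(1, -1, 1, 1)  # broadcast over N, H, W
```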

This PR introduces operator fusion for the `conv2d` followed by `reshape`, `add`, and `relu` sequence, commonly found in deep learning models (e.g., the convolution + bias + activation pattern...
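Written out as separate ops, the targeted sequence looks like the sketch below; a fused operator computes the same values in a single pass. The naive convolution is a toy reference only, and all names here are hypothetical.

```python
import numpy as np

def conv2d_nchw(x, w):
    """Naive NCHW conv2d, stride 1, no padding; reference only."""
    n, c, h, wd = x.shape
    oc, _, kh, kw = w.shape
    out = np.zeros((n, oc, h - kh + 1, wd - kw + 1))
    for b in range(n):
        for o in range(oc):
            for i in range(out.shape[2]):
                for j in range(out.shape[3]):
                    out[b, o, i, j] = np.sum(x[b, :, i:i + kh, j:j + kw] * w[o])
    return out

def conv_bias_relu(x, w, bias):
    """The conv2d -> reshape -> add -> relu sequence as separate ops."""
    y = conv2d_nchw(x, w)              # conv2d
    b = bias.reshape(1, -1, 1, 1)      # reshape bias to (1, OC, 1, 1)
    y = y + b                          # add
    return np.maximum(y, 0.0)          # relu
```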

## Overview

The Linalg pipeline transforms ONNX models into executable LLVM IR through a series of dialect conversions and optimizations. This document provides a detailed breakdown of all passes applied...