Yuchen Jin

Results: 35 comments by Yuchen Jin

Hi @leandron, thanks for your feedback! :) We share a common goal of minimizing disruption while incrementally improving TVM. One of the main questions is how to bring in the...

Having taken on board the feedback from community members (@leandron, @ekalda, @Mousius, @masahi), a number of us involved in this RFC (@YuchenJin, @jwfromm, @tqchen, @areusch, @mbaret, @jroesch, @tmoreau89) feel it’s necessary...

Thanks everyone for the feedback. One thing that we seem to agree on is that there is a strong need to support symbolic shape use cases for TVM, as represented...

There were concerns brought up in [RFC #95](https://github.com/apache/tvm-rfcs/pull/95) that this RFC conversation did not cover "how proposal fit into TVM". We agree that discussing the fit is important and...

Right, in this case, we should throw an error when parsing. cc @yongwww We need to decide whether we want to do explicit type/shape casting (for example, through match_shape) or...
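For readers without context, here is a rough, illustrative sketch of what explicit shape casting along the lines of the Relax RFC's match_shape could look like. The exact TVMScript spelling (R.match_shape, the T.var bindings, the function name flatten) is an assumption for illustration, not the final API:

```python
# Illustrative Relax-style sketch; the syntax is assumed for illustration
# and follows the shape-matching idea from the Relax RFC, not a final API.
from tvm.script import relax as R, tir as T

@R.function
def flatten(x: R.Tensor(ndim=2, dtype="float32")):
    n = T.var("int64")
    m = T.var("int64")
    # Bind the symbolic dims n and m to x's runtime shape; a rank or
    # dimension mismatch becomes an explicit error rather than a silent one.
    R.match_shape(x, (n, m))
    # Downstream code can now use n and m symbolically.
    return R.reshape(x, (n * m,))
```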

Thanks everyone for the discussions! A brief recap of our discussions so far:

- We are certain that Relax supports dynamic-shape workloads that are not supported by the current TVM,...

Thanks @psrivas2 for reporting the issue! Two questions that could help us know more about the context:

- Is it Hexagon-specific, i.e., if we tune conv2d on CPU and...

Thanks everyone for the discussions at the community meeting today, and thanks @slyubomirsky and @psrivas2 for proposing alternative plans and summarizing the tradeoffs! Introducing a new type indeed needs careful...

Thanks for the discussions so far!

> Are PrimValues mutable? E.g., can a PackedFunc mutate them in place?

If we restrict the Relax-level operators to only take device-aware tensors, and...
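As background for the mutability question, here is a small sketch of how today's PackedFunc FFI already treats POD scalars versus NDArrays: scalars cross the boundary by value, so a callee cannot mutate the caller's copy, while NDArray contents can be mutated in place. The registered name demo.try_mutate is made up for illustration:

```python
import numpy as np
import tvm

# POD scalars cross the PackedFunc boundary by value: the callee only
# rebinds a local copy. NDArrays are passed by handle, so writes to
# their contents are visible to the caller.
@tvm.register_func("demo.try_mutate")  # hypothetical name, for illustration
def try_mutate(scalar, arr):
    scalar = scalar + 1                                 # local copy only
    arr.copyfrom(np.ones(arr.shape, dtype=arr.dtype))   # in-place, visible
    return scalar

f = tvm.get_global_func("demo.try_mutate")
x = tvm.nd.array(np.zeros((2,), dtype="float32"))
print(f(41, x))   # 42 -- returned value; the caller's 41 was never mutated
print(x.numpy())  # [1. 1.] -- the NDArray was mutated in place
```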

> To be clear, this is proposing to replace the current use of `shape_`? I would definitely be in favor of having something that maps more clearly to the annotations...