Mingqiu Sun

11 comments by Mingqiu Sun

@zeux What is your rationale for using byte-shift on ARM? Do you see a speedup over using word-shift on ARM? If so, how much?
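For readers unfamiliar with the distinction behind the question, here is a minimal scalar sketch of the two approaches: shifting a whole 128-bit register by whole bytes versus shifting each 32-bit lane by a bit count. Real implementations use platform intrinsics (e.g., NEON byte-granularity ops such as `vextq_u8` versus per-lane shifts such as `vshrq_n_u32`); nothing here is taken from the actual codec.

```rust
// Scalar stand-ins only: contrasts a byte-granularity shift of one wide
// register with independent bit-shifts of each 32-bit lane.
fn byte_shift_right(v: u128, bytes: u32) -> u128 {
    v >> (8 * bytes) // move the whole register by whole bytes
}

fn word_shift_right(lanes: [u32; 4], bits: u32) -> [u32; 4] {
    lanes.map(|w| w >> bits) // shift each 32-bit lane independently
}

fn main() {
    assert_eq!(byte_shift_right(0x1122_3344, 1), 0x11_2233);
    assert_eq!(word_shift_right([0x80, 0x80, 0x80, 0x80], 4), [0x8; 4]);
}
```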

@nfrechette By WASM per platform, did you mean increasing the richness of the WASM SIMD instruction set, so that different WASM code could be generated per platform for optimal performance?...

> I'd be curious to see how others feel about this though. I am not sure how useful it's been to have `wasi-nn` and `wasi-crypto` in this repository, I am...

@rjzak Adding training is always in the back of our minds. In order to support training, we would need to expose the cost function, the variable set, and backpropagation. So it is...
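A purely hypothetical sketch of the surface such a training extension might need, based only on the three items named above; none of these names exist in the wasi-nn spec today.

```rust
// Hypothetical only: wasi-nn has no training API. The three capabilities
// named above (cost function, variable set, backpropagation) might surface
// roughly like this.
trait TrainableGraph {
    type Tensor;
    /// Evaluate the cost (loss) function for one (input, target) batch.
    fn loss(&mut self, input: &Self::Tensor, target: &Self::Tensor) -> f32;
    /// Handles to the trainable variable set (weights, biases).
    fn variables(&self) -> Vec<u32>;
    /// Backpropagate the last computed loss and apply one update step.
    fn backward_and_update(&mut self, learning_rate: f32);
}
```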

bfloat16 is also supported on CPU, via AVX-512 BF16 on Intel for example (https://en.wikichip.org/wiki/x86/avx512_bf16). So both floating-point formats need to be supported, in my opinion.
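Since bfloat16 is just the high 16 bits of an IEEE-754 float32 (same sign and exponent range, truncated mantissa), a minimal conversion sketch looks like this; production code and the AVX-512 BF16 instructions round to nearest even rather than truncating.

```rust
/// Truncating f32 -> bf16: keep the top 16 bits (sign, exponent, 7 explicit
/// mantissa bits). Hardware converters round to nearest even instead.
fn f32_to_bf16(x: f32) -> u16 {
    (x.to_bits() >> 16) as u16
}

/// bf16 -> f32 is exact: widen back with zeroed low mantissa bits.
fn bf16_to_f32(b: u16) -> f32 {
    f32::from_bits((b as u32) << 16)
}

fn main() {
    let x = std::f32::consts::PI;
    let roundtrip = bf16_to_f32(f32_to_bf16(x));
    // bf16 keeps only 8 bits of significand, so the round trip is approximate.
    assert!((x - roundtrip).abs() < 0.02);
}
```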

Transformer networks for LLMs take input sequences of a fixed length. So in that regard our current wasi-nn spec is sufficient. However, the data preprocessing part, where text of arbitrary...
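As an illustration of that fixed-length requirement, a tokenizer pipeline typically pads or truncates token ids to the model's sequence length before they become a tensor. The names below are illustrative, not from any spec or tokenizer library.

```rust
/// Illustrative only: clip or pad a token-id sequence to the fixed length a
/// transformer expects. `pad_id` is whatever id the tokenizer reserves for
/// padding (hypothetical here).
fn to_fixed_length(mut tokens: Vec<u32>, seq_len: usize, pad_id: u32) -> Vec<u32> {
    tokens.truncate(seq_len);       // clip over-long inputs
    tokens.resize(seq_len, pad_id); // pad short ones up to seq_len
    tokens
}

fn main() {
    let ids = vec![101, 7592, 2088, 102]; // example token ids
    assert_eq!(
        to_fixed_length(ids, 8, 0),
        vec![101, 7592, 2088, 102, 0, 0, 0, 0]
    );
}
```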

Usually, an LLM expects input tensors in fixed shapes such as [batch, sequence, feature]. This maps well to our current spec for tensors. Maybe what is needed is a...
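A sketch of how a [batch, sequence, feature] input lines up with the tensor record the wasi-nn spec describes (dimensions, element type, raw bytes); the Rust types here are stand-ins, not the actual bindings.

```rust
// Stand-in types mirroring the shape of wasi-nn's tensor record; the real
// bindings differ by version, so treat this as a sketch.
#[derive(Debug)]
enum TensorType { F32 }

struct Tensor {
    dimensions: Vec<u32>, // e.g. [batch, sequence, feature]
    tensor_type: TensorType,
    data: Vec<u8>,        // flattened row-major payload
}

fn main() {
    let (batch, sequence, feature) = (1u32, 128u32, 768u32);
    let elems = (batch * sequence * feature) as usize;
    let input = Tensor {
        dimensions: vec![batch, sequence, feature],
        tensor_type: TensorType::F32,
        data: vec![0u8; elems * 4], // 4 bytes per f32 element
    };
    println!("dims {:?}, {:?}, {} bytes", input.dimensions, input.tensor_type, input.data.len());
    assert_eq!(input.data.len(), elems * std::mem::size_of::<f32>());
}
```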

@WenheLI It is not in the scope of the current phase of the proposal, but if you have ideas for specific APIs that you would like to add, let's have...

@austinvhuang What we have done so far means that you don't need to compile any framework into Wasm if all you want to do is load a model and...
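To make the "load a model and run it" flow concrete, here is a sketch following the early wasi-nn Rust bindings (`load`, `init_execution_context`, `set_input`, `compute`, `get_output`); exact signatures and encodings vary by bindings version and backend, and the file names, dimensions, and output size are placeholders.

```rust
// Sketch of the wasi-nn inference flow with the early Rust bindings.
fn main() {
    let xml = std::fs::read("model.xml").unwrap();     // OpenVINO IR (placeholder)
    let weights = std::fs::read("model.bin").unwrap(); // weights (placeholder)
    let input = vec![0u8; 1 * 3 * 224 * 224 * 4];      // dummy f32 NCHW image

    unsafe {
        // 1. Load the graph for a given encoding and execution target.
        let graph = wasi_nn::load(
            &[&xml, &weights],
            wasi_nn::GRAPH_ENCODING_OPENVINO,
            wasi_nn::EXECUTION_TARGET_CPU,
        )
        .unwrap();
        // 2. Create an execution context bound to that graph.
        let ctx = wasi_nn::init_execution_context(graph).unwrap();
        // 3. Describe and bind the input tensor.
        wasi_nn::set_input(
            ctx,
            0,
            wasi_nn::Tensor {
                dimensions: &[1, 3, 224, 224],
                type_: wasi_nn::TENSOR_TYPE_F32,
                data: &input,
            },
        )
        .unwrap();
        // 4. Run inference, then copy out the first output tensor.
        wasi_nn::compute(ctx).unwrap();
        let mut out = vec![0f32; 1001];
        wasi_nn::get_output(
            ctx,
            0,
            out.as_mut_ptr() as *mut u8,
            (out.len() * std::mem::size_of::<f32>()) as u32,
        )
        .unwrap();
    }
}
```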

@austinvhuang Thanks for the good input. The rationale for starting with inferencing is that we want to support Wasm as a machine learning deployment vehicle for popular frameworks first. The...