executorch
On-device AI across mobile, embedded and edge for PyTorch
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):

* __->__ #3022

Differential Revision: [D56070351](https://our.internmc.facebook.com/intern/diff/D56070351/)
Run with:

```
python -m examples.models.llama2.runner.generation --pte --tokenizer --prompt=
```
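The `--pte` and `--tokenizer` flags in the pasted command are missing their arguments; a filled-in sketch might look like the following, where the model, tokenizer, and prompt values are hypothetical placeholders, not paths from the original report:

```shell
# Sketch only: stories110M.pte and tokenizer.bin are placeholder paths
# standing in for your own exported model and tokenizer files.
python -m examples.models.llama2.runner.generation \
    --pte stories110M.pte \
    --tokenizer tokenizer.bin \
    --prompt="Once upon a time"
```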
Summary:

* Update tutorial due to recent changes.
* Clean up setup.sh for app helper lib build.

Pull Request resolved: https://github.com/pytorch/executorch/pull/2962

Reviewed By: cccclai

Differential Revision: D55951189

Pulled By: kirklandsign...
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):

* __->__ #3019
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):

* __->__ #2958
* #2957

Differential Revision: [D55946527](https://our.internmc.facebook.com/intern/diff/D55946527/)
Hi there, I'm currently unable to get any further in the iOS Demo App Tutorial when running

```bash
sh backends/apple/coreml/scripts/install_requirements.sh
```

from [BUILDING AND RUNNING EXECUTORCH WITH CORE ML BACKEND](https://pytorch.org/executorch/stable/build-run-coreml.html#setting-up-your-developer-environment)....
1. AOT, generate the QNN-delegated model:

   ```
   python -m examples.models.llama2.export_llama --qnn --use_kv_cache -p /home/chenlai/models/stories110M/params.json -c /home/chenlai/models/stories110M/stories110M.pt
   ```

2. Runtime: follow [build_llama_android.sh](https://github.com/pytorch/executorch/blob/main/.ci/scripts/build_llama_android.sh) with the QNN config on, then run:

   ```
   /llama_main --model_path=./stories_qnn_SM8450.pte --tokenizer_path=./tokenizer.bin --prompt="Once"
   ```
Summary: Test if CI passes with no change.

Differential Revision: D55995628
Summary: It's not obvious that there are two different versions of the documentation.

Reviewed By: iseeyuan

Differential Revision: D56018543