huang3eng
```
================================ FAILURES ================================
_____________________________ test_api_process _____________________________

client =

    def test_api_process(client):
        resp = client.get("/api/process")
        assert resp.status_code == 200
        assert resp.headers["Content-Type"] == "application/json"
        resp_payload = json.loads(resp.data)
        assert len(resp_payload["processes"]) > 0
>       assert...
```
### Describe the bug

I pulled the latest fla code [2e73362](https://github.com/fla-org/flash-linear-attention/commit/2e7336262c11f8bc6cd6a94b1eb5ee353ae8b4cd) from the main branch and then installed flash-linear-attention from source. I used the code mentioned in other bug...
### Checklist

- [x] I have checked [FAQs](https://github.com/fla-org/flash-linear-attention/blob/main/FAQs.md) and existing issues for similar problems
- [x] Please report this bug in English to ensure wider understanding and support

### Describe...
Why is `draft_vocab_size` in `configs/qwen3-30B-A3B-eagle3.json` set to 32,000 instead of the `vocab_size` of 151,936 in qwen3-30B-A3B?
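For context on why a draft vocabulary can be smaller than the target vocabulary: EAGLE3-style draft models commonly predict over a reduced vocabulary of the target model's most frequent tokens, then translate each draft token id back to the full target vocabulary with a lookup table. The sketch below is a hypothetical illustration of that mapping idea, not the actual EAGLE3 code; the names `d2t` and `draft_to_target` and the tiny sizes are assumptions for demonstration.

```python
# Hypothetical sketch of a reduced draft vocabulary, as used by
# EAGLE3-style speculative decoding. Tiny sizes stand in for the
# real ones (draft_vocab_size=32_000, vocab_size=151_936).
draft_vocab_size = 4
target_vocab_size = 10

# d2t: for each draft-vocab id, the corresponding target-vocab id.
# In practice this table is built from token-frequency statistics.
d2t = [7, 2, 9, 0]
assert len(d2t) == draft_vocab_size
assert all(0 <= t < target_vocab_size for t in d2t)

def draft_to_target(draft_id: int) -> int:
    """Translate a draft-model token id into the target model's id space."""
    return d2t[draft_id]

# The draft head argmaxes over only draft_vocab_size logits, then the
# chosen id is remapped before verification by the target model.
print(draft_to_target(2))  # prints 9
```

Under this scheme, the draft head's output projection is much smaller (32,000 rows instead of 151,936), which reduces draft-model parameters and per-step latency at the cost of only being able to propose the most frequent tokens.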