Request: NPU-optimized qwen2.5-coder
Thanks for the deepseek 7b and 14b NPU-optimized models on X Elite. I hope there's an NPU-optimized qwen2.5-coder version, which would work better in my use case.
Thank you for your feedback. We are currently working on this model, and there will be an update later.
@luhan2017 @vriveras @timenick @hibrenda Thanks for your work on this! Could you please ensure that tool/function calling support is included for agents across these model sizes (7B, 14B, and 32B, Instruct instead of the base model), prioritizing the larger sizes first (starting with 32B)? Also, please document the quantization used in the model card. @xgdgsc If possible, could you update your original post to reflect this request as well?
Really excited to see this come together. Thanks again!
Thank you for your feedback. We've added this to our backlog for future consideration. While we can't commit to a timeline right now, your input helps us prioritize improvements.