Model optimization question
Describe the feature and the current behavior/state. Could you consider using TensorFlow Lite or OpenVINO as the inference backend? That would avoid depending on the full TensorFlow or PyTorch library, speed up model loading, be friendlier to users without abundant GPU resources, and improve inference speed on desktop CPUs. If a Transformer is used, could knowledge distillation be applied to reduce the number of stacked layers (e.g., DistilBERT), which would likewise speed up inference? See the sketch below.
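To make the idea concrete, here is a minimal sketch, not HanLP's actual API, of how a trained TensorFlow SavedModel could be converted to TensorFlow Lite once by the model publisher and then run on CPU with the lightweight `tf.lite.Interpreter` instead of the full framework. The directory `exported_tagger` and the file name `tagger.tflite` are hypothetical placeholders.

```python
import numpy as np
import tensorflow as tf

# Hypothetical path to an exported SavedModel; any SavedModel directory works the same way.
saved_model_dir = "exported_tagger"

# One-off conversion step (done once by the model publisher, not by end users).
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables post-training quantization
tflite_model = converter.convert()
with open("tagger.tflite", "wb") as f:
    f.write(tflite_model)

# CPU-only inference with the small TFLite runtime.
interpreter = tf.lite.Interpreter(model_path="tagger.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input for illustration; real code would feed token ids from the tokenizer.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
logits = interpreter.get_tensor(output_details[0]["index"])
print(logits.shape)
```

The distillation part is similarly small in code. Assuming the encoder were loaded through the Hugging Face `transformers` library (an assumption for illustration, not necessarily how HanLP builds its models), swapping a 12-layer BERT for a 6-layer DistilBERT is roughly a one-line change:

```python
from transformers import AutoTokenizer, TFAutoModel

# distilbert-base-multilingual-cased keeps 6 of mBERT's 12 layers,
# trading a small accuracy drop for a sizeable CPU speed-up.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
encoder = TFAutoModel.from_pretrained("distilbert-base-multilingual-cased")
```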
Will this change the current API? How? No.
Who will benefit with this feature? Everyone who uses it.
Are you willing to contribute it (Yes/No): Yes, but I may not have enough resources to do it.
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Python version:
- HanLP version: v2.0.0-alpha.0
Any other info: No.
- [x] I've carefully completed this form.
Thank you for your advice. In the alpha stage, I mostly focus on replicating/improving SOTA papers. I will look into the engineering side in the future. Deployment is not easy; contributions from everyone (especially industry users) are welcome.
Related discussion on the forum: https://bbs.hankcs.com/t/topic/3834