outbreak-sen
I use an A3 and a Linux computer. I wonder whether Guidance will still work and keep the aircraft moving smoothly once I obtain control authority and use the flight control API.
I want to use the T265 with ROS 2 on Ubuntu 22.04, so I installed the RealSense SDK 2.53.1, and it installed successfully. But something went wrong when I installed realsense-ros 4.51.1. The full text is...
I built it on Linux, only to get ***Unknown option -qt=qt5. In other words, "qmake -qt=qt5" doesn't work. I wonder how to build it, since I am not familiar with Qt.
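One possible cause, offered as a guess: the `-qt=qt5` selector is understood by the `qtchooser` wrapper, not by a real `qmake` binary, so if the `qmake` on your PATH is already Qt's own `qmake` it rejects the flag as an unknown option. A hedged sketch of how to check and work around this (the `/usr/lib/qt5/bin` path is an assumption based on Debian/Ubuntu layouts):

```shell
# "-qt=qt5" is a qtchooser feature; a real qmake binary rejects it.
qmake --version            # check which Qt this qmake belongs to

# If it already reports Qt 5.x, simply drop the flag:
qmake && make

# If it is a Qt 4/Qt 6 qmake, invoke the Qt5 one directly
# (path assumed for Debian/Ubuntu-style installs):
/usr/lib/qt5/bin/qmake && make
```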
Implemented a fine-tuning experiment of the "albert/albert-base-v1" model on the "SetFit/20_newsgroups" dataset. Task link: https://gitee.com/mindspore/community/issues/IAUONP. The transformers+pytorch+4060 benchmark was written by myself; its repository is at https://github.com/outbreak-sen/albert_finetuned. The changed code is under llm/finetune/albert and only includes the mindnlp+mindspore part. Experiment results are as follows:

# Albert Fine-tuning on 20 Newsgroups

## Hardware

Resource spec: NPU: 1*Ascend-D910B (memory: 64GB), CPU: 24 cores, RAM: 192GB
AI computing center: Wuhan AI Computing Center
Image: mindspore_2_5_py311_cann8
Torch training hardware: Nvidia 3090

## Model and Dataset

Model: "albert/albert-base-v1"
Dataset: "SetFit/20_newsgroups"

## Training and Evaluation Loss

Since the full training loss log is too long, only the last fifteen loss values are shown.

### mindspore+mindNLP

|...
Implemented a fine-tuning experiment of the bigbird_pegasus model on the databricks/databricks-dolly-15k dataset. Task link: https://gitee.com/mindspore/community/issues/IAUPBF. The transformers+pytorch+4060 benchmark was written by myself; its repository is at https://github.com/outbreak-sen/bigbird_pegasus_finetune. The changed code is under llm/finetune/bigbird_prgasus and only includes the mindnlp+mindspore part. Experiment results are as follows:

# bigbird_pegasus Fine-tuning Comparison

## train loss

Comparison of the training loss during fine-tuning:

| epoch | mindnlp+mindspore | transformer+torch(4060) |
| ----- | ----------------- | ----------------------- |
| 1 | 2.0958 |...
I have already downloaded the official SMPL model, although it is version 1.1 now; that should not matter.

OFFICIAL_SMPL_PATH = '/home/outbreak/HPE/CircularWorld_smpl_smplify/code/models/basicmodel_f_lbs_10_207_0_v1.1.0.pkl'

Then I ran prepare_model. When running example I get an error:

```
Loading model from ./model_files/prepared_smpl_male.pkl
Traceback (most recent call last):
  File "example.py", line 46, in <module>
    mesh.set_params(pose_pca=pose_pca, pose_glb=pose_glb, shape=shape)
  File "/home/outbreak/HPE/Minimal-IK/models.py", line 97, in set_params
    return...
```
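Since the traceback is cut off, one quick first check (offered as an assumption, not a confirmed diagnosis) is whether the v1.1 pickle contains the same top-level parameter keys that Minimal-IK's prepare_model/set_params expect. A minimal, self-contained sketch; `demo.pkl` and its keys below are stand-ins for the real model path and contents:

```python
# Sketch: list the top-level keys of a pickled SMPL-style model file,
# to compare the v1.1 download against what the code expects.
import pickle

def smpl_keys(path):
    """Return the sorted top-level keys of a pickled model dict."""
    with open(path, "rb") as f:
        # Official SMPL pickles were written under Python 2; latin1
        # avoids decode errors when loading them under Python 3.
        data = pickle.load(f, encoding="latin1")
    return sorted(data.keys())

# Self-contained demo with a dummy stand-in file (not the real model):
dummy = {"v_template": None, "shapedirs": None, "J_regressor": None}
with open("demo.pkl", "wb") as f:
    pickle.dump(dummy, f)

print(smpl_keys("demo.pkl"))  # → ['J_regressor', 'shapedirs', 'v_template']
```

Running the same helper on OFFICIAL_SMPL_PATH and on ./model_files/prepared_smpl_male.pkl would show whether the two files disagree on key names.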