AI攻城狮

11 issues filed by AI攻城狮

Hello, how is the proposal in the slide below generated? Is it strong img aug --> student backbone --> student neck --> teacher head, or strong img aug --> student backbone --> student neck --> student head? ![image](https://user-images.githubusercontent.com/32134215/186302979-6af1c2a1-b03d-4bd1-97ec-b9cfbc8aa767.png)
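For reference, a minimal sketch of the two data flows the question contrasts; all module names are hypothetical placeholders for the blocks named in the slide, not the repository's real implementation:

```python
# The two candidate proposal paths contrasted in the question above.
# student_backbone, student_neck, student_head and teacher_head are placeholders
# for the blocks shown in the slide, not APIs from the actual repository.
def proposals_via_teacher_head(strong_aug_img, student_backbone, student_neck, teacher_head):
    feats = student_neck(student_backbone(strong_aug_img))
    return teacher_head(feats)  # student features decoded by the teacher's head

def proposals_via_student_head(strong_aug_img, student_backbone, student_neck, student_head):
    feats = student_neck(student_backbone(strong_aug_img))
    return student_head(feats)  # student features decoded by the student's own head
```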

![image](https://user-images.githubusercontent.com/32134215/167570144-0dc9e7ff-aa3a-47e2-ad6f-07a80248a61c.png)

I loaded your code into Android Studio, but after running it the APK crashes immediately on launch. What could be the cause?

I took ncnn's mobilessd model, generated an ssd.mem.h file directly with the ncnn2mem tool, and used it to replace the ssd-int8.mem.h file in your project. The resulting app only opens the camera and never draws any detection boxes. Is this model-replacement procedure correct? If not, what should I do? Looking forward to your reply, many thanks! ![image](https://user-images.githubusercontent.com/32134215/77721450-e5386280-7025-11ea-8785-254e6aa55ba6.png)
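For context, a small sketch of the header-generation step described above, assuming ncnn's ncnn2mem tool is built and on PATH; the param/bin file names are placeholders, and the resulting ssd.mem.h is the file the issue swaps in for ssd-int8.mem.h:

```python
# Sketch of regenerating the in-memory model headers with ncnn2mem: it takes a
# param/bin pair and writes an id header plus a mem header. File names below are
# placeholders for the mobilessd model files.
import subprocess

subprocess.run(
    ["ncnn2mem", "mobilenet_ssd.param", "mobilenet_ssd.bin", "ssd.id.h", "ssd.mem.h"],
    check=True,
)
```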

Lidar_AI_Solution/CUDA-BEVFusion/src/common/tensor.cu(135): error: calling a __device__ function("__half") from a __host__ function("arange_kernel_host") is not allowed
1 error detected in the compilation of "/tmp/tmpxft_00006d55_00000000-4_tensor.cpp4.ii".
CMake Error at bevfusion_core_generated_tensor.cu.o.Release.cmake:280 (message): Error generating file /work/share/usr/cyy/BEV/Lidar_AI_Solution/CUDA-BEVFusion/build/CMakeFiles/bevfusion_core.dir/src/common/./bevfusion_core_generated_tensor.cu.o

File "ppq-0.6.6-py3.8.egg/ppq/parser/mnn_exporter.py", line 28, in export_onnx_quantization_config
    for cfg, var in op.config_with_variable:
AttributeError: 'Operation' object has no attribute 'config_with_variable'
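A hedged workaround sketch for the traceback above, assuming the failure comes from plain Operation objects (which lack config_with_variable, unlike quantized operations) reaching the exporter; this is not the official ppq fix, just a defensive guard built around the attribute named in the error:

```python
# Defensive guard sketch: skip graph operations that do not expose
# config_with_variable instead of raising AttributeError. `graph` is assumed to
# be a ppq graph-like object whose `operations` maps names to operations.
def export_onnx_quantization_config(graph):
    for op in graph.operations.values():
        if not hasattr(op, "config_with_variable"):
            continue  # plain (non-quantized) operation, nothing to export
        for cfg, var in op.config_with_variable:
            ...  # emit the quantization record for (cfg, var) here
```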

### Before Asking

- [X] I have read the [README](https://github.com/tinyvision/DAMO-YOLO/blob/master/README.md) carefully.
- [X] I want to train my custom dataset, and I have read the [tutorials for finetune on...


Setup: following the model links given in the documentation, I downloaded the Qwen2.5-VL-3B-Instruct-MNN and Qwen2-VL-2B-Instruct-MNN models.

Test image demo.jpeg: ![Image](https://github.com/user-attachments/assets/8b957429-06e3-492d-9182-90b047441f60)

Test prompt: ![Image](https://github.com/user-attachments/assets/47f05647-0032-4ab7-a1da-9ebb31d5d7d5)

config.json:
{
  "llm_model": "llm.mnn",
  "llm_weight": "llm.mnn.weight",
  "backend_type": "opencl",
  "thread_num": 64,
  "precision": "low",
  "memory": "low"
}

Results (over repeated runs it is clear that the LLM output is unstable: sometimes the result is correct and sometimes it is not, which is very abnormal for inference):

First run (correct result):
(base) yyds@yyds:~/Codes/work/MNN/build $ /home/yyds/Codes/work/MNN/build/llm_demo /media/yyds/cyy2t/move/Qwen2.5-VL-3B-Instruct-MNN/config.json /media/yyds/cyy2t/move/prompt.txt
CPU Group:...
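A small sketch (reusing the paths quoted above as assumptions about the local setup) of how the instability could be quantified: run the same llm_demo invocation several times and count how many distinct outputs appear.

```python
# Stability check sketch: repeat the same llm_demo run and count distinct outputs.
# The binary, config, and prompt paths are the ones quoted in this issue, not
# anything shipped by MNN itself.
import subprocess
from collections import Counter

DEMO = "/home/yyds/Codes/work/MNN/build/llm_demo"
CONFIG = "/media/yyds/cyy2t/move/Qwen2.5-VL-3B-Instruct-MNN/config.json"
PROMPT = "/media/yyds/cyy2t/move/prompt.txt"

outputs = Counter()
for _ in range(5):
    result = subprocess.run([DEMO, CONFIG, PROMPT], capture_output=True, text=True)
    outputs[result.stdout.strip()] += 1

# A deterministic setup should produce exactly one distinct output.
print(f"{len(outputs)} distinct outputs across {sum(outputs.values())} runs")
```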


config.json:
{
  "llm_model": "llm.mnn",
  "llm_weight": "llm.mnn.weight",
  "backend_type": "opencl",
  "thread_num": 4,
  "precision": "low",
  "memory": "low",
  "mllm": {
    "backend_type": "cuda",
    "thread_num": 4,
    "precision": "low",
    "memory": "low"
  }
}

Error: ![Image](https://github.com/user-attachments/assets/aa4b112e-5d17-4638-93a3-b34793a01203)

Switching the Visual module's inference backend to cpu also results in a segmentation fault. Config:...
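A backend-bisection sketch under the same assumptions (the demo/config/prompt paths quoted in these issues, and that the local MNN build offers cpu, opencl, and cuda backends): rewrite the visual "mllm" backend in config.json, rerun the demo, and record whether the process was killed by a signal.

```python
# Rewrite the "mllm" (visual) backend_type, rerun llm_demo, and report crashes.
# A negative return code means the process died from a signal such as SIGSEGV.
import json
import subprocess

DEMO = "/home/yyds/Codes/work/MNN/build/llm_demo"
CONFIG = "/media/yyds/cyy2t/move/Qwen2.5-VL-3B-Instruct-MNN/config.json"
PROMPT = "/media/yyds/cyy2t/move/prompt.txt"

for backend in ("cpu", "opencl", "cuda"):
    with open(CONFIG) as f:
        cfg = json.load(f)
    cfg.setdefault("mllm", {})["backend_type"] = backend
    with open(CONFIG, "w") as f:
        json.dump(cfg, f, indent=2)
    result = subprocess.run([DEMO, CONFIG, PROMPT], capture_output=True, text=True)
    print(backend, "crashed" if result.returncode < 0 else f"exit code {result.returncode}")
```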
