kensun0
This thread solved the problem for me: https://discuss.pytorch.org/t/extra-10gb-memory-on-gpu-0-in-ddp-tutorial/118113. Call torch.cuda.set_device(rank) and then torch.cuda.empty_cache().
Lines 279-346 are a non-optimized version for plain testing. In this version, every sample must go through all the trees, whether it is positive or negative. In fact, a negative sample should not go...
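The early-rejection idea above can be sketched as a soft cascade: a sample accumulates a score tree by tree, and once the running score falls below a stage threshold, a negative sample is rejected without evaluating the remaining trees. This is a minimal illustrative sketch, not the repo's actual API; the function name, the per-stage thresholds, and the callable trees are all assumptions.

```python
def cascade_score(sample, trees, thresholds):
    """Score `sample` with a soft cascade of boosted trees.

    trees: list of callables, each returning this tree's score contribution.
    thresholds: per-stage rejection thresholds (hypothetical, not from the repo).
    Returns (final score, number of trees actually evaluated).
    """
    score = 0.0
    for i, (tree, thresh) in enumerate(zip(trees, thresholds)):
        score += tree(sample)
        if score < thresh:
            # Early rejection: a clearly negative sample skips the rest.
            return score, i + 1
    return score, len(trees)
```

With this structure, most negative windows are discarded after only a few trees, which is where the optimized test-time version gains its speed.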
To use VS, you need to create a project yourself; the code depends only on OpenCV. Negative samples can be found here: http://www.vision.caltech.edu/feifeili/Datasets.htm (Caltech 101 object categories); my positive sample...
I did not implement train and test as separate functions, but you can find similar code in main.cpp.
1. See LBFRegressor.cpp, line 887: binfeatures = DeriveBinaryFeat2(RandomForest_[stage], images, image_index, current_shapes, bounding_boxs, face, score, fcount, fface); 2. This version is used for training. I have another implementation that has been optimized for testing; it could be faster than the...
Yes, the one you mentioned is never called. Maybe I should delete it.
Sorry, it has been a long time since I finished this, and I have lost the model file. You can train one with your own dataset.
I do not remember; you can verify it yourself.
I am not sure; probably an image loaded at line 2112 is bad, perhaps a file that does not exist. You will have to check it yourself.
1. Overflow is possible, but in practice it does not happen. For a positive sample, overflow could only occur if its score were a large negative number. But a positive sample with a large negative score gets a larger weight, so the new score computed by the next tree tends toward a large positive number; meanwhile a positive sample whose score is already a large positive number has a weight approaching 0, so it will not overflow. The same reasoning applies to negative samples, so you could call it a self-stabilizing learning system. 2. I always set the recall rate to 100%. The paper says it is preset, and I do not know how they set it, but with a cascade of 50 trees, if each tree's recall is 0.995 the overall recall is 0.995^50, which is far too low, so I set it to 100%.
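The recall arithmetic above is easy to check: per-stage recall compounds multiplicatively across a cascade, so 50 stages at 0.995 each lose more than a fifth of the true positives.

```python
# Per-stage recall r compounded over n cascaded stages: overall recall = r**n.
r, n = 0.995, 50
overall = r ** n
print(round(overall, 4))  # roughly 0.778, i.e. ~22% of true faces rejected
```

This is why forcing 100% per-stage recall (at the cost of more false positives per stage) is the usual choice for long cascades.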