```python
h, s, v = cv2.split(hsv)
```
This is the result of running the original project when training with the COCO dataset.
There's a problem in the code.
Does it work?
@DeepKnowledge1 @okokchoi I think it's pretty much the same. Besides the size of the feature map, the code below is the heavy part:
```python
dist = SSD.cdist(embedding_vectors[:, :, i], mean[None, :], metric='mahalanobis', VI=conv_inv)
...
```
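For context, here is a minimal sketch of the kind of per-position loop being described, assuming `scipy.spatial.distance` imported as `SSD` and hypothetical PaDiM-style shapes (an illustration of why this is slow, not the repository's exact code):

```python
import numpy as np
from scipy.spatial import distance as SSD

B, C, HW = 32, 100, 56 * 56                    # hypothetical batch / channel / position counts
embedding_vectors = np.random.randn(B, C, HW)  # test embeddings
mean = np.random.randn(HW, C)                  # per-position means from training
cov = np.stack([np.eye(C)] * HW)               # placeholder per-position covariances

dist_list = []
for i in range(HW):                            # one cdist call per spatial position: the bottleneck
    conv_inv = np.linalg.inv(cov[i])
    dist = SSD.cdist(embedding_vectors[:, :, i], mean[i][None, :],
                     metric='mahalanobis', VI=conv_inv)
    dist_list.append(dist[:, 0])
dist_map = np.stack(dist_list)                 # (HW, B) distance map
```

Since the loop body is pure Python calling into SciPy once per position, the H*W iterations dominate the runtime, which is what motivates the parallel and GPU versions below.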
Achieved a 3.5x speedup through real process-based multiprocessing.
@fryegg @GreatScherzo I have written it as follows:
```python
import multiprocessing

manager = multiprocessing.Manager()
cpu_core = 8
dist_list = manager.list()
for number in range(cpu_core):
    dist_list.append(manager.list())  # one shared result slot per worker

def calculate_distance(number, start, end, train_outputs, embedding_vectors):
    global dist_list
    ...
```
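To make the truncated snippet concrete, here is a self-contained sketch of the same pattern: split the H*W positions into chunks, give each worker process its own slot in the manager list, then join and reassemble. The shapes, the chunking, and the worker body are assumptions for illustration (the sketch also passes `dist_list` as an argument instead of using a global, which is more portable), not the exact code from the comment above:

```python
import multiprocessing

import numpy as np
from scipy.spatial import distance as SSD

def calculate_distance(number, start, end, mean, cov_inv, embedding_vectors, dist_list):
    # Each worker handles positions [start, end) and writes into its own slot.
    partial = []
    for i in range(start, end):
        dist = SSD.cdist(embedding_vectors[:, :, i], mean[i][None, :],
                         metric='mahalanobis', VI=cov_inv[i])
        partial.append(dist[:, 0])
    dist_list[number] = partial

if __name__ == '__main__':
    B, C, HW = 32, 100, 56 * 56                              # hypothetical shapes
    embedding_vectors = np.random.randn(B, C, HW).astype(np.float32)
    mean = np.random.randn(HW, C).astype(np.float32)
    cov_inv = np.stack([np.eye(C, dtype=np.float32)] * HW)   # placeholder inverse covariances

    manager = multiprocessing.Manager()
    cpu_core = 8
    dist_list = manager.list([None] * cpu_core)

    chunk = (HW + cpu_core - 1) // cpu_core
    procs = []
    for number in range(cpu_core):
        start, end = number * chunk, min((number + 1) * chunk, HW)
        p = multiprocessing.Process(target=calculate_distance,
                                    args=(number, start, end, mean, cov_inv,
                                          embedding_vectors, dist_list))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()

    # Reassemble per-position distances in order: shape (HW, B).
    dist_map = np.concatenate([np.stack(part) for part in dist_list if part])
```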
@fryegg @GreatScherzo Computing the Mahalanobis distance on the GPU is 24 times faster than the original (CPU parallel processing gave 3.5x), so it improves on the CPU-parallel version by roughly 6x.
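For reference, a batched GPU formulation along these lines can compute all positions at once with a single `einsum`. This is a hedged sketch assuming PyTorch and hypothetical shapes, not ingbeeedd's actual implementation:

```python
import torch

def mahalanobis_gpu(embedding_vectors, mean, cov_inv):
    """Batched Mahalanobis distance over all spatial positions at once.

    embedding_vectors: (B, C, HW) test embeddings
    mean:              (C, HW)    per-position means
    cov_inv:           (HW, C, C) per-position inverse covariances
    returns:           (B, HW)    distances
    """
    delta = (embedding_vectors - mean.unsqueeze(0)).permute(2, 0, 1)  # (HW, B, C)
    # Quadratic form delta^T * cov_inv * delta, batched over positions.
    m = torch.einsum('pbc,pcd,pbd->pb', delta, cov_inv, delta)        # (HW, B)
    return torch.sqrt(m).t()                                          # (B, HW)

# Hypothetical usage with placeholder data:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
B, C, HW = 32, 100, 56 * 56
emb = torch.randn(B, C, HW, device=device)
mean = torch.randn(C, HW, device=device)
cov_inv = torch.eye(C, device=device).expand(HW, C, C)  # placeholder inverse covariances
dist = mahalanobis_gpu(emb, mean, cov_inv)              # (B, HW)
```

Replacing the per-position Python loop with one batched tensor operation is what lets the GPU version outpace both the original and the CPU-parallel approach.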
@fryegg I'm cleaning up the code. I'll leave a comment as soon as it's organized.
@GreatScherzo @fryegg @DeepKnowledge1 @okokchoi @xiahaifeng1995 @prob1995 @sangkyuleeKOR I've published the code here: https://github.com/ingbeeedd/PaDiM-EfficientNet :)