Daniel Rodríguez L.


I have no idea, but I think it is totally possible, because if you take a look at the chart in the readme, the last score is like 160

If you do `ls -l` you can see the date when the file was last modified, so after running `merge_bn.py`, your `MobileNetSSD_deploy.caffemodel` should have the date and time when the script...
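A quick sketch of the check (the file name here is a dummy stand-in for `MobileNetSSD_deploy.caffemodel`):

```shell
# Create a dummy file just to illustrate the check
touch example.caffemodel
# The long listing shows the last-modification date and time of the file
ls -l example.caffemodel
```

If the timestamp matches the moment you ran the script, the model was regenerated.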

I need to fix some things that I didn't think about before

Ok, I found that (at least for this example) splitting the first 8 layers between different machines does not work because of the hooks triggered on the first...

As far as I know, no library provides automatic tools for model parallelism (FastAI, PyTorch, TensorFlow...); it's up to you to divide the model and send the appropriate layers to each GPU...
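A minimal sketch of what "dividing the layers by hand" looks like in PyTorch. The device names are assumptions (with two GPUs on one node they would be `cuda:0` and `cuda:1`; here it falls back to CPU so the sketch runs anywhere), and the tiny model is only for illustration:

```python
import torch
import torch.nn as nn

# Pick two devices by hand; fall back to CPU when two GPUs are not available.
dev0 = torch.device("cuda:0" if torch.cuda.device_count() >= 2 else "cpu")
dev1 = torch.device("cuda:1" if torch.cuda.device_count() >= 2 else "cpu")

class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(16, 32).to(dev0)  # first chunk of layers on device 0
        self.part2 = nn.Linear(32, 4).to(dev1)   # second chunk on device 1

    def forward(self, x):
        x = self.part1(x.to(dev0))
        # You must move the activations between devices yourself.
        return self.part2(x.to(dev1))

model = SplitModel()
out = model(torch.randn(8, 16))
print(out.shape)  # torch.Size([8, 4])
```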

I think I can arrange a minimal example for DP and MP (the latter for when the GPUs are on the same node). On the other hand, distributed MP requires...

This example would be for data parallelism:

```python
import torch
import numpy as np
import segmentation_models_pytorch as smp
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
# For...
```
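The snippet above is truncated, so here is a guess at the minimal shape such an example takes: init a process group and wrap the model in `DDP`. The `gloo` backend, the single-process `world_size=1`, and the plain `nn.Linear` stand-in model are all assumptions made so the sketch runs on CPU:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process setup just to show the API; real DP launches one process per GPU.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(10, 2)      # stand-in for the real (e.g. smp) model
ddp_model = DDP(model)              # gradients are all-reduced across processes

out = ddp_model(torch.randn(4, 10))
print(out.shape)  # torch.Size([4, 2])
dist.destroy_process_group()
```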

Btw, two things that might come in handy when testing: in my case, when running MP with RPC I had to manually set the network interface of each machine by putting...
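The comment is cut off, but pinning the interface for RPC is usually done through environment variables set before `init_rpc`. `TP_SOCKET_IFNAME` (TensorPipe) and `GLOO_SOCKET_IFNAME` (Gloo) are the documented PyTorch variables; `eth0` is a placeholder interface name:

```python
import os

# Assumption: pin the network interface each machine uses for RPC traffic.
# "eth0" is a placeholder; use the interface that actually connects your nodes.
os.environ["TP_SOCKET_IFNAME"] = "eth0"    # TensorPipe backend
os.environ["GLOO_SOCKET_IFNAME"] = "eth0"  # Gloo backend
print(os.environ["TP_SOCKET_IFNAME"])  # eth0
```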

@mikepparks I'm having the same problem, so I can give you the info:
- GP2040 v0.7.12-Beta2
- pico-sdk 2.2.0

On the other hand, using GP2040 v0.7.12-Beta2 or GP2040 v0.7.11 with...

I managed to compile it (I don't know if it works correctly). The main problem seems to be that pico-sdk uses the latest version of mbedTLS, which is 3.x, and GP2040...