Ralf Rettig
`DistributedNSGAII` works well for me with `dask`. However, I was completely unaware of this possibility and had previously hacked together my own solution to get `jmetalpy` working with `dask`. I think...
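For illustration, a minimal sketch of the kind of hand-rolled approach I mean: shipping objective evaluations to a plain dask `Client` instead of using jmetalpy's built-in support. The `evaluate` function and the population are hypothetical placeholders, not the actual jmetalpy integration:

```python
from dask.distributed import Client

def evaluate(solution):
    # Hypothetical objective function; stands in for a jmetalpy
    # problem's evaluate() call in the hand-rolled setup.
    return sum(x ** 2 for x in solution)

if __name__ == "__main__":
    client = Client()  # local cluster; a remote dask cluster works the same way
    population = [[float(i), i + 1.0] for i in range(10)]
    # Ship each evaluation to the cluster and gather the fitness values.
    futures = client.map(evaluate, population)
    fitnesses = client.gather(futures)
    print(fitnesses)
    client.close()
```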
Ok, a workaround is to use `extraPodConfig`:
```yaml
dask-gateway:
  gateway:
    backend:
      scheduler:
        extraPodConfig:
          imagePullSecrets:
            - name: default-secret
```
The same has to be done for the worker.
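Presumably the worker side of the helm values looks like this, with `worker` in place of `scheduler` (I have only verified the scheduler part myself):

```yaml
dask-gateway:
  gateway:
    backend:
      worker:
        extraPodConfig:
          imagePullSecrets:
            - name: default-secret
```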
Thanks for the suggestion, but I am not deep enough into Kubernetes to fix this myself.
@TomAugspurger right.
The solutions discussed in this issue might help you: https://github.com/py4j/py4j/issues/320
I can no longer reproduce the issue in my application, perhaps due to changes to `django-plotly-dash` in the meantime.
Has there been any progress on this pull request recently? This option would also be very helpful for the `GCPCluster`. Most machine types on GCP have more...
I tested this new option `num_workers_per_vm` and it does not seem to work for `GCPCluster`. This code reproduces the problem:
```python
import time
from dask_cloudprovider.gcp import GCPCluster
from dask.distributed import...
```
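The snippet above is cut off; a hypothetical reconstruction of that kind of check (the exact cluster arguments are assumptions, and running it requires GCP credentials) could look like:

```python
import time
from dask.distributed import Client
from dask_cloudprovider.gcp import GCPCluster

# Hypothetical reconstruction of the truncated snippet above.
# num_workers_per_vm is the option from the pull request under discussion;
# the other arguments are assumptions.
cluster = GCPCluster(n_workers=1, num_workers_per_vm=2)
client = Client(cluster)
time.sleep(60)  # give the VM time to boot and the workers time to register
# With the option working, more than one worker should show up per VM.
print(len(client.scheduler_info()["workers"]))
client.close()
cluster.close()
```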
Maybe, but I have an application that is not thread-safe because of the underlying FORTRAN code it uses.
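For non-thread-safe extension code, the usual workaround (a sketch, assuming a local cluster) is to run one thread per worker process, so the extension is never called concurrently within a process:

```python
from dask.distributed import Client

def run_model(x):
    # Placeholder for a call into the thread-unsafe extension.
    return x * 2

if __name__ == "__main__":
    # One thread per worker process: parallelism comes from processes,
    # so each process only ever runs one task at a time.
    client = Client(n_workers=4, threads_per_worker=1, processes=True)
    results = client.gather(client.map(run_model, range(8)))
    print(results)
    client.close()
```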
According to the documentation, this option exists only for threads, not for workers: https://distributed.dask.org/en/latest/worker.html#distributed.worker.Worker Trying it anyway also fails.