distributed
A distributed task scheduler for Dask
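A minimal usage sketch (the standard client quickstart pattern; the submitted computation is illustrative):

```python
from dask.distributed import Client

# With no address given, Client() starts a local scheduler and workers.
client = Client()

# Submit a task to the cluster and fetch its result.
future = client.submit(sum, [1, 2, 3])
print(future.result())  # 6
```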
```
________________ ERROR at teardown of test_scheduler_port_zero _________________
cleanup = None

    @pytest.fixture
    def loop(cleanup):
>       with check_instances():

distributed/utils_test.py:148: _ _ _ _ _ _ _ _ _ _ _ _ _...
```
The test failed, and then there was an error during teardown as well:
```
____________________ ERROR at teardown of test_no_dashboard ____________________
cleanup = None

    @pytest.fixture
    def loop(cleanup):
>       with check_instances():

distributed/utils_test.py:148: _...
```
Since d59500ea97c02753eac9d42951d9c4a5d4f17685 (#6658), launching workers can fail on shared systems if someone else happens to be running Dask at the same time (since they will have created `/tmp/dask-worker-space` and we won't...
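A possible mitigation, sketched below rather than taken from the issue, is to give each user a distinct worker-space directory via Dask's `temporary-directory` config key (the per-user path built here is illustrative):

```python
import getpass
import os

import dask
from dask.distributed import Client

# Illustrative per-user scratch path, so two users on a shared machine
# never contend for the same /tmp/dask-worker-space directory.
scratch = os.path.join("/tmp", f"dask-worker-space-{getpass.getuser()}")

# Workers fall back to the "temporary-directory" config value when no
# explicit local_directory is given.
with dask.config.set({"temporary-directory": scratch}):
    with Client(n_workers=2) as client:
        # Ask each worker where it actually put its local directory.
        print(client.run(lambda dask_worker: dask_worker.local_directory))
```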
This is unexpected; I tried pretty hard to make sure this test wouldn't be flaky.
```
______________________________ test_popen_timeout ______________________________
capsys =

    def test_popen_timeout(capsys):
>       with pytest.raises(subprocess.TimeoutExpired):
E       Failed: DID NOT RAISE...
```
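For context, a minimal sketch of the pattern the failing assertion exercises (this is not the actual test body, which is truncated above): a child process that outlives its timeout should make `communicate` raise `subprocess.TimeoutExpired`.

```python
import subprocess

import pytest


def test_popen_timeout_sketch():
    # A child process that sleeps far longer than the timeout below.
    proc = subprocess.Popen(["sleep", "10"])
    try:
        # communicate() should give up after 0.1s and raise TimeoutExpired;
        # "DID NOT RAISE" in the log above means this expectation failed.
        with pytest.raises(subprocess.TimeoutExpired):
            proc.communicate(timeout=0.1)
    finally:
        proc.kill()
```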
Haven't seen this fail before, but it looks like an instance of https://github.com/dask/distributed/issues/5186. The key part is just that a worker fails to come up and we see `OSError: [Errno 98]...
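Errno 98 is `EADDRINUSE`. A common way to sidestep this class of race in tests, sketched here as a general pattern rather than the fix used in distributed, is to bind to port 0 and let the OS pick a free port:

```python
import socket

# Binding to port 0 asks the OS for any unused port, avoiding
# "[Errno 98] Address already in use" races between parallel test runs.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
print(sock.getsockname()[1])  # the port the OS chose
sock.close()
```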
Seeing a [CI failure](https://github.com/dask/distributed/runs/6358362779?check_suite_focus=true) with `test_get_task_stream_save`.
With #6270 we now have a first set of HTTP routes exposed on the scheduler. Currently there are two modes for the `/api/v1/retire_workers` route: 1. Retire n workers...
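As a sketch of how a client might call this route over plain HTTP (the host/port and the `{"n": 2}` payload shape are assumptions for illustration, not details confirmed by the truncated text above):

```python
import json
from urllib.request import Request, urlopen

# Hypothetical request against a scheduler whose HTTP API is served on the
# default dashboard port; the JSON body is an assumed shape for the
# "retire n workers" mode.
req = Request(
    "http://localhost:8787/api/v1/retire_workers",
    data=json.dumps({"n": 2}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urlopen(req) as resp:
    print(json.loads(resp.read()))
```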
I just helped a Dask user get the Dashboard running during the SciPy sprints. It was pretty painful and had a bunch of failures along the way that made it...
I get the following error from main. This doesn't appear to happen when using multiple processes.
```python
from dask.distributed import Client

client = Client(n_workers=4, processes=False)

import dask

df = dask.datasets.timeseries()...
```
Might be a recent regression:
```
__________________________ test_scatter_compute_lose ___________________________
c = s = a = b =

    @gen_cluster(client=True)
    async def test_scatter_compute_lose(c, s, a, b):
        [x] = await c.scatter([[1, 2, 3, 4]], workers=a.address)...
```