Aaron Z.
As mentioned in #40, I need some help resolving why ["spark.executor.allowSparkContext"](https://github.com/joblib/joblib-spark/pull/41/files#diff-d0425054f455c253997fc682225ffb9dd0e49ae77b1ba0cb55c66a19090e3739R115) was needed on `pyspark==3.2.1` to create the local Spark session that is reused for the tests.
In many practical applications of joblib-spark I find I already have a `SparkSession` available, and it does not make sense to me to close it and create a new one...
## Summary

When loading a graph into _d3graph_, users must provide a dense adjacency matrix. This is inconvenient because most real-world graphs are sparse, and the dense representation is memory-inefficient...
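To make the memory argument concrete, here is a small illustration (not d3graph code; the path-graph example is invented) comparing how many entries a dense adjacency matrix must hold versus a sparse edge-list representation:

```python
# Illustrative only: compare storage for a sparse graph (a path on n nodes)
# as a dense n x n adjacency matrix vs. a plain edge list (COO-style).
n = 10_000
edges = [(i, i + 1) for i in range(n - 1)]  # path graph: n - 1 edges

dense_cells = n * n        # entries a dense adjacency matrix stores
sparse_cells = len(edges)  # entries the sparse representation stores

print(dense_cells, sparse_cells)  # 100000000 9999
```

For graphs of this shape the dense form stores roughly `n / 2` times more entries than there are edges, which is why accepting a sparse input (e.g. a `scipy.sparse` matrix) would matter in practice.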
Based on the discussion in #7334, this adds a backend protocol that developers can use, to varying degrees, to ensure their backend conforms to networkx's contract.

## How does this help...
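To illustrate what "conforming to a protocol" can mean in Python, here is a minimal sketch using `typing.Protocol`. The class and method names (`GraphBackend`, `convert_from_nx`, `convert_to_nx`) are illustrative placeholders, not necessarily the names the PR defines:

```python
from typing import Any, Protocol, runtime_checkable

# Hypothetical sketch of a backend protocol; the real contract lives in the PR.
@runtime_checkable
class GraphBackend(Protocol):
    def convert_from_nx(self, graph: Any) -> Any: ...
    def convert_to_nx(self, obj: Any) -> Any: ...

class MyBackend:
    """A no-op backend used only to show the structural check."""
    def convert_from_nx(self, graph: Any) -> Any:
        return graph
    def convert_to_nx(self, obj: Any) -> Any:
        return obj

# runtime_checkable protocols support isinstance() checks on method presence,
# so a backend author can verify conformance without inheriting anything.
print(isinstance(MyBackend(), GraphBackend))  # True
```

A structural protocol like this lets third-party backends be validated (and tested) against networkx's expectations without a hard inheritance dependency on networkx itself.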
In the [`SIGNDiffusion`](https://github.com/dmlc/dgl/blob/master/python/dgl/transforms/module.py#L1692) transform, the original [SIGN paper](https://arxiv.org/abs/2004.11198) is given as the reference. I'm new to DGL, so my understanding could be wrong, but the implementation here seems partial. The...
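For context, the core SIGN idea is to precompute powers of a (typically normalized) adjacency operator applied to the node features, so training needs no message passing. A toy sketch of that precomputation (pure Python, no normalization, not DGL's `SIGNDiffusion` code):

```python
# Toy sketch of SIGN-style precomputation: build [X, A@X, A@A@X, ...] once.
def matmul(a, b):
    """Naive dense matrix multiply over nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[0, 1], [1, 0]]   # toy adjacency: 2 nodes, one edge (unnormalized)
X = [[1.0], [2.0]]     # 1-dimensional node features

ops = [X]
for _ in range(2):     # K = 2 diffusion steps
    ops.append(matmul(A, ops[-1]))

print(ops)  # [[[1.0], [2.0]], [[2.0], [1.0]], [[1.0], [2.0]]]
```

In SIGN these precomputed operators are concatenated per node and fed to a simple MLP; whether the DGL transform covers all operator variants from the paper is exactly the question raised above.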
A workaround for #11933, but also an improvement in its own right. The user may set the keyword argument `keep_ignores` for `pytest.warns` to avoid catching warnings that were filtered out,...
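For background on why this is needed, here is an illustration (not pytest's actual source) of the behavior the flag would address: recording warnings with `simplefilter("always")`, which is essentially what `pytest.warns` does, overrides any `"ignore"` filters the user installed, so deliberately silenced warnings are still captured:

```python
import warnings

# Illustrative only: mimic pytest.warns' recording approach and show that
# it captures a warning the user had explicitly filtered out.
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")  # effectively what pytest.warns does
        warnings.warn("silenced elsewhere", DeprecationWarning)

print(len(caught))  # 1 -- the "ignored" warning was still recorded
```

A `keep_ignores` option would let the checker honor those pre-existing `"ignore"` filters instead of discarding them.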
# Bug report

### Bug description:

The `message` and `module` fields in `warnings.filters` type-check as strings but can be of type `re.Pattern`:

```python
import warnings
from typing import TYPE_CHECKING

warnings.filterwarnings("ignore", ...
```
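A minimal demonstration of the runtime types (my own reproduction, not the original report's code): `warnings.filterwarnings` compiles non-empty `message` and `module` arguments with `re.compile`, so the stored filter tuple holds `re.Pattern` objects even though the fields are annotated as `str`:

```python
import re
import warnings

# Reproduction sketch: inspect the filter tuple that filterwarnings() stores.
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", message="deprecated", module="mymod")
    action, message, category, module, lineno = warnings.filters[0]
    print(type(message), type(module))
    # <class 're.Pattern'> <class 're.Pattern'>
```

So any code that iterates `warnings.filters` and treats these fields as plain strings will fail at runtime while still passing the type checker.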