Joel Mathew Cherian
A bit early to start looking into this; a better time would be after finalizing the requirements, or once we hit a roadblock and this becomes absolutely necessary. But out...
Found some details on possible parsers for Python [here](https://tomassetti.me/parsing-in-python/). However, building a custom parser would be arduous and difficult to maintain. Referring to the _parsing python in python_ sub-topic of...
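Before reaching for a third-party parser, it is worth noting that the standard library's `ast` module already parses Python source into a syntax tree; a minimal sketch:

```python
import ast

# Parse a small snippet with the stdlib `ast` module.
source = "x = 1 + 2"
tree = ast.parse(source)

# Collect the node types encountered while walking the tree,
# e.g. Module, Assign, BinOp, Constant, ...
node_types = [type(node).__name__ for node in ast.walk(tree)]
print(node_types[0])      # Module
print("BinOp" in node_types)  # True
```

This avoids maintaining a custom grammar entirely, at the cost of being tied to the running interpreter's Python version.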
Realized that seastar is performing just-in-time (JIT) compilation, i.e. compilation that happens at runtime instead of ahead of time. Was looking into `numba`, which is a JIT compiler...
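To make the runtime-compilation idea concrete: Python itself can compile source to bytecode at runtime with the builtin `compile`. This is not what numba does (numba lowers to machine code via LLVM), just a small illustration of compiling and executing code after the program has started:

```python
# Build a function definition as a string, compile it at runtime,
# and execute it into a fresh namespace.
source = """
def square(x):
    return x * x
"""
code = compile(source, filename="<generated>", mode="exec")

namespace = {}
exec(code, namespace)  # defines `square` in `namespace`
print(namespace["square"](7))  # 49
```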
Found the numba compiler portion: https://numba.pydata.org/numba-doc/latest/developer/architecture.html It seems like they start from CPython bytecode, which is quite complicated and best to avoid at the moment.
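For reference, the bytecode that numba starts from can be inspected with the stdlib `dis` module; the exact opcodes vary by interpreter version, which hints at why working at this level is complicated:

```python
import dis

def square(x):
    return x * x

# List the opcode names CPython generated for `square`.
ops = [ins.opname for ins in dis.get_instructions(square)]
print(ops)
```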
Since type inference is one reason monkey-patching is being used, maybe we can use something like https://mypy.readthedocs.io/en/stable/index.html to help with inference.
Starting with the preprocess script, the current time taken is as follows (noting that I've already done some optimization here for loading the file). This is from the `sx-stackoverflow` set...
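For measuring the stages of the preprocess script individually, one simple pattern (the stage names below are placeholders, not the actual script's functions) is to wrap each stage with `time.perf_counter`:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # Print the wall-clock time of the enclosed block.
    start = time.perf_counter()
    yield
    print(f"{label}: {time.perf_counter() - start:.3f}s")

# Hypothetical stages standing in for the preprocess script's phases.
with timed("load"):
    edges = [(i, i + 1, i) for i in range(100_000)]  # fake edge list
with timed("sort"):
    edges.sort(key=lambda e: e[2])  # sort by timestamp
```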
Okay, so I have identified a few repeated operations in the data loading pipeline. Right now the flow is:

```
Dataset (edge list) ----(Preprocessor)--> JsonData (add/delete list) ----(Data loader)--> GraphObj...
```
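A hedged sketch of the Preprocessor step, assuming each edge carries a timestamp and the add/delete lists group edges by the snapshot in which they appear and expire (the field names and the one-window lifetime are assumptions, not the actual JsonData schema):

```python
import json
from collections import defaultdict

def edges_to_events(edges, window):
    """Group timestamped edges into per-snapshot add/delete lists.

    `edges` is a list of (src, dst, t); an edge added in snapshot
    t // window is assumed to be deleted one snapshot later.
    """
    additions = defaultdict(list)
    deletions = defaultdict(list)
    for src, dst, t in edges:
        snap = t // window
        additions[snap].append([src, dst])
        deletions[snap + 1].append([src, dst])  # assumed lifetime
    return {"add": dict(additions), "delete": dict(deletions)}

events = edges_to_events([(0, 1, 3), (1, 2, 7), (2, 3, 12)], window=10)
print(json.dumps(events))
```

If the same edge list is re-read on every run, caching this JSON to disk would remove one of the repeated operations in the pipeline.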
I'm still running a few tests on `benchmarking/dataset/preprocessing/preprocess_temporal_data.py`; I'll push the changes I made soon.
Trying to run `train.py` with `dgl.RelGraphConv` to see if the train file can execute without `dgl-hack`. It turns out there is a custom function `add_edges_with_type` that was implemented. Additionally, had...
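Since `add_edges_with_type` is custom to `dgl-hack`, its likely semantics (this is a guess at the interface, not the actual implementation) can be mimicked by storing an edge-type id alongside each edge, which is what `dgl.RelGraphConv` ultimately needs as its edge-type tensor:

```python
class TypedEdgeGraph:
    """Toy stand-in for a graph supporting typed edge insertion.

    Approximates what a custom `add_edges_with_type` might do:
    append edges and record an etype id per edge.
    """

    def __init__(self):
        self.src, self.dst, self.etypes = [], [], []

    def add_edges_with_type(self, src, dst, etype):
        # `src`, `dst`, `etype` are parallel lists of equal length.
        assert len(src) == len(dst) == len(etype)
        self.src.extend(src)
        self.dst.extend(dst)
        self.etypes.extend(etype)

    def num_edges(self):
        return len(self.src)

g = TypedEdgeGraph()
g.add_edges_with_type([0, 1], [1, 2], [0, 1])
print(g.num_edges())  # 2
```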
Fixed the DGL code on the `aifb` dataset. This dataset does not have node features, so instead we label each node and then assign it a random feature from a...
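A minimal sketch of assigning random features to labeled nodes (the feature dimension and Gaussian distribution here are assumptions; the actual code may draw rows from an embedding table instead):

```python
import random

def random_node_features(num_nodes, dim, seed=0):
    """Assign each node id a random dense feature vector."""
    rng = random.Random(seed)  # seeded for reproducibility
    return {node: [rng.gauss(0.0, 1.0) for _ in range(dim)]
            for node in range(num_nodes)}

feats = random_node_features(num_nodes=4, dim=8)
print(len(feats), len(feats[0]))  # 4 8
```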