Anindyadeep
This is part 2 of the blog post series I intended to start. Here I will cover the topic `Different types of graphs in Graph ML and understanding different...
In the README, it is mentioned that after doing all the preprocessing we will be left with 23M "good" links. Now: 1. Is my assumption correct that the 23M good...
Ibis is doing some incredible work integrating Substrait, generating a Substrait plan from the user's query to support cross-database operations in Python. Suppose we have a table like...
Add model loading time for each benchmark so that we can understand how much time it takes to load the weights onto the GPUs.
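A minimal sketch of how the load-time measurement could be recorded; the `timed_load` helper and the results-dict layout are assumptions for illustration, not the repo's actual API:

```python
import time

def timed_load(load_fn, *args, **kwargs):
    """Run a model-loading callable and return (model, seconds elapsed)."""
    start = time.perf_counter()
    model = load_fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return model, elapsed

# Hypothetical usage inside a benchmark runner:
# model, load_time = timed_load(AutoModel.from_pretrained, "...")
# results["model_load_time_s"] = round(load_time, 2)
```

Using `time.perf_counter()` rather than `time.time()` gives a monotonic, high-resolution clock, which matters for short loads.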
There are two ways of doing this: 1. Put a decorator on the base class function, or on each of the methods. 2. Or do it within the method using...
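A hedged sketch of both options, using a hypothetical timing decorator (the class and method names are made up for illustration):

```python
import functools
import time

def timed(fn):
    """Hypothetical decorator that records how long a method takes."""
    @functools.wraps(fn)
    def wrapper(self, *args, **kwargs):
        start = time.perf_counter()
        result = fn(self, *args, **kwargs)
        self.last_elapsed = time.perf_counter() - start
        return result
    return wrapper

# Option 1: decorate once on the base class; subclasses inherit the timing
# by overriding do_run() instead of run().
class BaseBenchmark:
    @timed
    def run(self):
        return self.do_run()

    def do_run(self):
        raise NotImplementedError

class MyBenchmark(BaseBenchmark):
    def do_run(self):
        return "ok"

# Option 2: call the timing logic explicitly inside each method.
class ManualBenchmark:
    def run(self):
        start = time.perf_counter()
        result = "ok"  # actual work would go here
        self.last_elapsed = time.perf_counter() - start
        return result
```

Option 1 keeps the instrumentation in one place, at the cost of requiring every subclass to follow the `do_run()` template; option 2 is more explicit but duplicates the timing code in each method.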
JAX
Since JAX is becoming very popular, it would be awesome if we could also benchmark the performance of Llama 2 written in JAX. Here is the [implementation](https://github.com/ayaka14732/llama-2-jax)
For each benchmark, check whether Flash Attention v2 is present; if it is, use it and mention it in the benchmark-specific README too.
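One way the availability check could look; assuming Flash Attention is distributed as the `flash_attn` package and exposes a `__version__` attribute (both assumptions about its packaging):

```python
import importlib.util

def package_available(name):
    """Return True if the named package can be imported."""
    return importlib.util.find_spec(name) is not None

def flash_attn_v2_available():
    # "flash_attn" as the package name and its version metadata layout
    # are assumptions; adjust to however the dependency is actually shipped.
    if not package_available("flash_attn"):
        return False
    import flash_attn
    return getattr(flash_attn, "__version__", "0").startswith("2")
```

The benchmark runner could then branch on `flash_attn_v2_available()` and record the result in the per-benchmark README.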
Consider this edge case: 1. A user runs a benchmark on the CPU first. 2. Now they want to run it on Mac or CUDA. But suppose CUDA/Mac has a...
MLX is an array framework for Apple silicon, highly optimized for Apple's architecture and for ML compute. See more [here](https://github.com/ml-explore/mlx)
Hello, thanks for the awesome implementation. However, I am running into several problems and am not able to run the model successfully. Here is my full reproduction procedure and the...