Mayank Asthana
Try the DataFrame API. You won't be able to import anything from this library into PySpark, since there is no Python library to import. The DataFrame API or Spark SQL...
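A minimal PySpark sketch of what this looks like, assuming the spark-redis jar is on the classpath and a Redis server is reachable at `localhost:6379` (the `table` name `person` and the connection settings here are illustrative, not from the thread):

```python
from pyspark.sql import SparkSession

# Sketch only: assumes the spark-redis jar is available to Spark
# and a Redis server is running at localhost:6379.
spark = (
    SparkSession.builder
    .appName("redis-df-example")
    .config("spark.redis.host", "localhost")
    .config("spark.redis.port", "6379")
    .getOrCreate()
)

# spark-redis exposes Redis hashes through a DataFrame source,
# so no Python import from the library itself is needed.
df = (
    spark.read
    .format("org.apache.spark.sql.redis")
    .option("table", "person")  # hypothetical table/key prefix
    .load()
)
df.show()
```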
Maybe we can reuse the [spark.redis.max.pipeline.size](https://github.com/RedisLabs/spark-redis/blob/47b11ce188ad478d602f7bfb9f8c1000dfd3cb04/src/main/scala/com/redislabs/provider/redis/RedisConfig.scala#L100) property for the `mget` size, since in this context they are equivalent.
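For reference, that property is set like any other Spark conf; if it were reused for the `mget` batch size, nothing would change on the user side. A hypothetical invocation (jar path, host, and value are placeholders):

```shell
# Example only: the same conf would bound both pipeline and MGET batch sizes
spark-submit \
  --jars spark-redis.jar \
  --conf spark.redis.host=localhost \
  --conf spark.redis.max.pipeline.size=100 \
  my_job.py
```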
So I just found out that all keys requested in one `MGET` must be in the same slot, and it is not enough that the keys be from the same...
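The slot constraint can be checked client-side: Redis Cluster maps each key to one of 16384 slots via CRC16-CCITT, hashing only the `{hash tag}` portion when one is present. A small self-contained Python sketch of that mapping (not from spark-redis, just the cluster slot rule):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of 16384 slots, honouring non-empty {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # tag must be non-empty
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot, so one MGET can fetch both.
print(key_slot("{user42}:name"), key_slot("{user42}:email"))
```

Grouping keys by `key_slot` before issuing `MGET` avoids the `CROSSSLOT` error that a same-node-but-different-slot batch would trigger.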
`fromRedisHash` also accepts an `Array[String]` of keys, so you could do this:

```scala
val fromRedisDf = sc.fromRedisHash(Array("key1", "key2")).toDF("key", "value1", "value2")
```

This does all filtering on the Redis...