Results 16 comments of Nikhil

Wow, thanks for the quick reply! Here are the relevant code snippets:

```scala
import com.microsoft.hyperspace._
import com.microsoft.hyperspace.index._

val hyperspace = new Hyperspace(leftDf.sparkSession)
val (leftTagged, additionalCols) =
  if (leftDf.schema.names.contains(Constants.TimeColumn)) {
    leftDf.withTimestampBasedPartition(Constants.TimePartitionColumn)...
```

I see. That makes sense. So this is not an S3 issue, then? If I restructure my code to save the relation to a table and then execute the join, it should...

I was googling how to reveal the complete stack trace in proto-repl/Atom, but I can't seem to find the complete stack trace when using `execute sexp` to execute specific...

```scala
def runModelInference(join: Join, inputs: Map[String, AnyRef]): Future[Map[String, AnyRef]]
```

This should instead be a batch / multi method:

```scala
def runModelInference(join: Join, inputs: Seq[Map[String, AnyRef]]): Future[Seq[Map[String, AnyRef]]]
```
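A minimal sketch of how such a batch overload could be layered on top of the single-request method. The `Join` stand-in, the dummy body, and the `Future.sequence` wiring are illustrative assumptions here, not the project's actual implementation (a real batch path would presumably do a single multi-row call rather than a per-item fan-out):

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

// `Join` is a hypothetical stand-in for the real type in the interface above.
case class Join(name: String)

object InferenceSketch {
  // Existing single-request method; the body is a dummy echo for illustration.
  def runModelInference(join: Join, inputs: Map[String, AnyRef]): Future[Map[String, AnyRef]] =
    Future.successful(inputs)

  // Proposed batch overload: fan out one future per input map and
  // reassemble the results in input order with Future.sequence.
  def runModelInference(join: Join, inputs: Seq[Map[String, AnyRef]]): Future[Seq[Map[String, AnyRef]]] =
    Future.sequence(inputs.map(m => runModelInference(join, m)))
}

val batch: Seq[Map[String, AnyRef]] =
  Seq(Map("a" -> ("1": AnyRef)), Map("b" -> ("2": AnyRef)))

// The batch result preserves the order of the inputs.
val out = Await.result(InferenceSketch.runModelInference(Join("demo"), batch), 5.seconds)
```

The payoff of the `Seq`-in / `Seq`-out signature is that callers can amortize per-request overhead, while the implementation is free to replace the fan-out with true batched inference later without changing the interface.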

Feedback:

1. Get the resource allocation delay and incorporate it into the UI
2. Add another tab for tuning tips
3. Try to show memory consumption

> Does the PR mean we will break up the batch request into mini batch requests and fetch them in parallel?

@nikhilsimha This basically only applies to Spark offline jobs, Yang.

> Will this guarantee at least one exception per each fetching?

It doesn't, but you can set the rate to 1 if you want that guarantee.