Chloe He
I might be missing some context/motivation, but I was wondering: wouldn't this be effectively the same as calling `create_table()` with an in-memory `obj`?
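For reference, here's a minimal sketch of what I mean; the backend choice and table/column names are just for illustration:

```python
import ibis
import pandas as pd

# In-memory data passed directly as `obj` -- no external source involved.
df = pd.DataFrame({"id": [1, 2, 3], "value": [10.0, 20.0, 30.0]})

con = ibis.duckdb.connect()  # illustrative backend choice

# `create_table` registers the in-memory object as a backend table.
t = con.create_table("events", obj=df)
print(t.to_pandas())
```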
I can't seem to modify the description - can you add the window op GitHub issue? It's #8847. Going to add some more details here based on...
> Does not this refer to the same operation as windowing functions above?

Yes, it's the same. I grouped everything under op logic.
Weekly update, 5/2/2024

- Revamping the window function implementation after much discussion. Working on redesigning the abstraction and API.
- Proposed API for supporting connectors in PySpark: #8984
Weekly update, 5/9/24

- PR #9131 implements a `mode` option so that we can support both batch and streaming workloads in the PySpark backend. It also implements the proposed API...
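Roughly, the `mode` option is used at connection time; this is a sketch of the intended usage, and the exact parameter placement is per the PR, so it may differ slightly:

```python
import ibis
from pyspark.sql import SparkSession

session = SparkSession.builder.getOrCreate()

# Batch mode: the pre-existing behavior.
batch_con = ibis.pyspark.connect(session, mode="batch")

# Streaming mode: queries execute via Spark Structured Streaming.
streaming_con = ibis.pyspark.connect(session, mode="streaming")
```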
## Existing Ibis API

In Flink and RW, we allow the user to pass all the configurations into `create_table`/`create_source`/`create_sink`, and the call gets compiled into a SQL query:

```sql
# RW...
```
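On the Python side, this looks roughly like the following with the Flink backend; the connector properties are illustrative, and I'm going off my reading of the `tbl_properties`/`watermark` parameters, so treat this as a sketch:

```python
import ibis
from pyflink.table import EnvironmentSettings, TableEnvironment

table_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
con = ibis.flink.connect(table_env)

# The connector configuration passed here gets compiled into a
# `CREATE TABLE ... WITH (...)` statement; property values are made up.
source = con.create_table(
    "payments",
    schema=ibis.schema(
        {"payment_id": "int64", "amount": "float64", "ts": "timestamp(3)"}
    ),
    tbl_properties={
        "connector": "kafka",
        "topic": "payments",
        "properties.bootstrap.servers": "localhost:9092",
        "format": "json",
    },
    watermark=ibis.watermark(
        time_col="ts", allowed_delay=ibis.interval(seconds=15)
    ),
)
```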
@gforsyth Thanks for the pointer! I dug a little deeper into what you said above, and I believe the issue arises when there are null values in a column...
I looked at streaming window aggregations for Flink, Spark, RW, Decodable, and Materialize. Note that Spark also has something called window functions, which correspond to the `over()` syntax in Ibis. It's...
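For context, the `over()` syntax in Ibis (what Spark calls window functions, as opposed to streaming window aggregations) computes a per-row value over a window of rows rather than bucketing rows by time. A minimal sketch with made-up table and column names:

```python
import ibis

t = ibis.table({"g": "string", "x": "float64", "ts": "timestamp"}, name="events")

# A window *function*: each row gets a value computed over its window.
w = ibis.window(group_by=t.g, order_by=t.ts)
expr = t.mutate(running_mean=t.x.mean().over(w))
```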
We had a discussion around this. There are two options that I have considered:

1. Something along the lines of the `TimestampBucket` operation. I think Spark's `window()` function is sort...
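For reference, Spark's `window()` function mentioned in option 1 buckets rows into time windows at the `groupBy` level; a minimal PySpark sketch (column names and the window duration are illustrative):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("a", 1.0, "2024-05-01 00:00:01")],
    ["g", "x", "ts"],
).withColumn("ts", F.col("ts").cast("timestamp"))

# Tumbling 10-minute windows: each row is assigned to one time bucket,
# similar in spirit to a `TimestampBucket`-style operation.
agg = df.groupBy(F.window("ts", "10 minutes"), "g").agg(
    F.mean("x").alias("mean_x")
)
```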
@jcrist Makes sense, I think we're on the same page. Just wondering: what is the motivation behind matching the schema of the existing `window_by` operations? I'm not opposed to it...
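In case it helps the discussion, my rough mental model of the existing `window_by` API is below; names like `window_size` and the `window_start`/`window_end` output columns are from my reading of the current code and may be slightly off:

```python
import ibis

t = ibis.table({"g": "string", "x": "float64", "ts": "timestamp"}, name="events")

# Tumbling-window aggregation via the windowing TVF API.
windowed = t.window_by(time_col=t.ts).tumble(window_size=ibis.interval(minutes=15))

# The result carries `window_start`/`window_end` columns that can be
# grouped on like any other column.
agg = windowed.group_by(["window_start", "window_end", "g"]).agg(
    mean_x=windowed.x.mean()
)
```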