Kruno Tomola Fabro
> If the collection of ES instances has at least similar RTT characteristics, ARC will be able to manage them as though they were one instance. That sounds good enough....
Related to splitting out the statistics-gathering layer: if it's split in the right way, it could be put below the distribution layer, which would also be a step toward solving this...
Related to the ARC segment in https://github.com/vectordotdev/vector/pull/13236#pullrequestreview-1055604956, it has effectively become a requirement to put ARC below distribution so that each endpoint is managed by its own ARC. To do it, dependency...
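To make the idea concrete, here is a rough, hypothetical sketch of "one ARC per endpoint below the distribution layer". `ArcController`, `Distributor`, and the AIMD-style adjustment are illustrative stand-ins, not Vector's actual types; the real ARC algorithm is more involved:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for an adaptive request concurrency controller.
struct ArcController {
    current_limit: usize,
}

impl ArcController {
    fn new() -> Self {
        Self { current_limit: 1 }
    }

    // Adjust the limit from an observed RTT; sketched as a simple
    // AIMD scheme (additive increase, multiplicative decrease).
    fn observe_rtt(&mut self, rtt_ms: u64, baseline_ms: u64) {
        if rtt_ms <= baseline_ms {
            self.current_limit += 1;
        } else {
            self.current_limit = (self.current_limit / 2).max(1);
        }
    }
}

// Distribution layer owning one controller per endpoint, so endpoints
// with different RTT characteristics no longer share a single limit.
struct Distributor {
    per_endpoint: HashMap<String, ArcController>,
}

impl Distributor {
    fn controller_for(&mut self, endpoint: &str) -> &mut ArcController {
        self.per_endpoint
            .entry(endpoint.to_string())
            .or_insert_with(ArcController::new)
    }
}

fn main() {
    let mut distributor = Distributor { per_endpoint: HashMap::new() };
    // The slow endpoint backs off without penalizing the fast one.
    distributor.controller_for("es-fast:9200").observe_rtt(5, 50);
    distributor.controller_for("es-slow:9200").observe_rtt(400, 50);
    for (endpoint, controller) in &distributor.per_endpoint {
        println!("{endpoint}: limit={}", controller.current_limit);
    }
}
```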
@KannarFr No, but it should be straightforward to do. Also, it's possible `serde` is the crate responsible for parsing the string, or perhaps something further down the dependency...
### Description
A dedicated source for collecting Kubernetes events. https://github.com/heptiolabs/eventrouter is a good example.

### Behavior
The source will collect events from its local Kubernetes cluster's API server.

### Configuration
```...
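As a rough illustration of what such a source would do under the hood, here is a minimal sketch that lists events from the local cluster's API server using the `kube`, `k8s_openapi`, `tokio`, and `anyhow` crates (assumed dependencies, not named in the issue); a real source would use a watch stream rather than a one-shot list:

```rust
use k8s_openapi::api::core::v1::Event;
use kube::{api::ListParams, Api, Client};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Connects via the local kubeconfig or the in-cluster service account.
    let client = Client::try_default().await?;

    // Events across all namespaces of the local cluster.
    let events: Api<Event> = Api::all(client);

    for event in events.list(&ListParams::default()).await? {
        println!(
            "{}: {} ({})",
            event.reason.unwrap_or_default(),
            event.message.unwrap_or_default(),
            event.type_.unwrap_or_default(),
        );
    }
    Ok(())
}
```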
## Proposal
Add a `parquet` codec and support for it in the `aws_s3` sink. This can be achieved with the [official Rust crate](https://github.com/apache/arrow-rs/tree/master/parquet), through the `Serializer` construct, and tied with a custom type...
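A minimal sketch of the encoding step this proposal implies, using the `arrow` and `parquet` crates: build a `RecordBatch` against an explicit schema and write it to an in-memory buffer, much as an `aws_s3` sink would before uploading the object. The field names and values are made up for illustration:

```rust
use std::sync::Arc;

use arrow::array::{ArrayRef, Int64Array, StringArray};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;
use parquet::arrow::ArrowWriter;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // An explicit schema; a real codec would take this from the sink's
    // schema requirement rather than hard-coding it.
    let schema = Arc::new(Schema::new(vec![
        Field::new("message", DataType::Utf8, false),
        Field::new("timestamp", DataType::Int64, false),
    ]));

    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![
            Arc::new(StringArray::from(vec!["hello", "world"])) as ArrayRef,
            Arc::new(Int64Array::from(vec![1_690_000_000_i64, 1_690_000_001])),
        ],
    )?;

    // Encode into an in-memory buffer, as a sink would before uploading.
    let mut buf = Vec::new();
    let mut writer = ArrowWriter::try_new(&mut buf, schema, None)?;
    writer.write(&batch)?;
    writer.close()?;

    println!("encoded {} bytes of parquet", buf.len());
    Ok(())
}
```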
Hey @fuchsnj, regarding question 1: no, a Parquet batch can't be built from already-encoded events. It's necessary to intercept them before encoding, or to process them in a suitable way...
For point 1, while the `parquet` implementation would be a sink-specific encoder at the moment, it can be written to be generic, in the sense that it can be reused...
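To illustrate the interception point described above, here is a hypothetical interface sketch: the sink hands the batch encoder structured events and only asks for bytes at flush time, so a Parquet encoder sees unencoded rows, and nothing in the interface is `aws_s3`-specific. None of these names are Vector's actual API:

```rust
use std::collections::BTreeMap;

// Stand-in for Vector's event type; purely illustrative.
type Event = BTreeMap<String, String>;

// Hypothetical interface: push *structured* events, produce bytes only
// when the batch is flushed, so no per-event encoding happens first.
trait BatchEncoder {
    fn push(&mut self, event: Event);
    fn finish(&mut self) -> Vec<u8>;
}

// Generic in spirit: any batching sink could reuse the same encoder.
struct ParquetBatchEncoder {
    rows: Vec<Event>,
}

impl BatchEncoder for ParquetBatchEncoder {
    fn push(&mut self, event: Event) {
        self.rows.push(event); // keep events structured until flush
    }

    fn finish(&mut self) -> Vec<u8> {
        // Real code would build Arrow arrays from `self.rows` and write
        // them with parquet's ArrowWriter; stubbed out here.
        self.rows.clear();
        Vec::new()
    }
}

fn main() {
    let mut encoder = ParquetBatchEncoder { rows: Vec::new() };
    let mut event = Event::new();
    event.insert("message".into(), "hello".into());
    encoder.push(event);
    println!("flushed {} bytes (stub)", encoder.finish().len());
}
```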
@fuchsnj I found the `schema_requirement`. That seems like exactly what's needed, so we can go with that. Instead of determining the schema at runtime, add an option to specify the schema for passing...
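A small sketch of what a schema specified up front could look like: mapping a user-supplied declaration (as it might appear in sink config) to an Arrow `Schema`, instead of inferring field types at runtime. The `schema_from_config` helper and the supported type names are hypothetical:

```rust
use arrow::datatypes::{DataType, Field, Schema};

// Hypothetical: map a user-supplied declaration (e.g. from sink config)
// to an Arrow schema, instead of inferring field types at runtime.
fn schema_from_config(fields: &[(&str, &str)]) -> Result<Schema, String> {
    let fields = fields
        .iter()
        .map(|(name, ty)| {
            let data_type = match *ty {
                "string" => DataType::Utf8,
                "int64" => DataType::Int64,
                "float64" => DataType::Float64,
                "bool" => DataType::Boolean,
                other => return Err(format!("unsupported type: {other}")),
            };
            Ok(Field::new(*name, data_type, true)) // nullable by default
        })
        .collect::<Result<Vec<_>, _>>()?;
    Ok(Schema::new(fields))
}

fn main() {
    let schema =
        schema_from_config(&[("message", "string"), ("status", "int64")]).unwrap();
    println!("{schema:?}");
}
```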
@spencergilbert I do. I was on vacation, hence the silence. The first point raised by @fuchsnj remains unresolved. Simplified, there are two ways forward: 1. Implement the `parquet` codec only for...