brightcoder01

Results: 32 comments by brightcoder01

> 3. We need a local mode to simplify the development and debugging. As a compiler, SQLFlow local mode can directly generate a Python program with several step functions and...
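The local mode quoted above — compiling a statement directly into a Python program with several step functions — might look like the sketch below. This is a hedged illustration, not SQLFlow's actual output; all names (`step_1_select_into_temp`, `step_2_run`, `run_steps`) are made up for the example.

```python
# Hypothetical shape of a locally-generated program: one plain function
# per workflow step, executed sequentially in a single process so a
# developer can set breakpoints in any step without Kubernetes.

def step_1_select_into_temp():
    """Materialize the SELECT result into a temporary table."""
    print("step 1: select data into temp table")

def step_2_run():
    """Execute the runnable program against the temp table."""
    print("step 2: run program on temp table")

def run_steps():
    # Local mode runs the steps in order, in-process.
    for step in (step_1_select_into_temp, step_2_run):
        step()

if __name__ == "__main__":
    run_steps()
```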

Add one more case of the `TO RUN` statement, where there are no detailed logs for the runnable program:

```txt
2020/09/09 06:36:45 SQLFlow Step Execute: SELECT * FROM iris.train TO RUN sqlflow/runnable:v0.0.1 CMD "binning.py",...
```

> 3\. We have a 2-pass compilation architecture: 1) **the first pass** generates the workflow YAML and submits it; 2) **the second pass** happens during _step execution_, where it uses `step -e`...

> * add `codegen/pai` to generate PAI submitter program. During workflow compilation, for `TO RUN` statement, we will have the flexibility to generate different command line call for the step...
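The flexibility described above — emitting a different command line per platform for a `TO RUN` step — can be sketched as below. This is illustrative only, not SQLFlow's real codegen: `pai_submitter.py` is a hypothetical entry point, while the default branch mirrors the quoted second-pass `step -e` invocation.

```python
# Map a target platform to the command line a workflow step would run.
def step_command(sql: str, platform: str) -> list:
    if platform == "pai":
        # Hypothetical PAI submitter program produced by codegen/pai.
        return ["python", "pai_submitter.py", "-e", sql]
    # Default: run the statement inside the step container via `step -e`.
    return ["step", "-e", sql]

print(step_command("SELECT * FROM iris.train", "default"))
```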

For the first step of all statements, `SELECT data from the source table into a temp table`, the behavior currently differs across DBMSs: - MySQL/Hive: We...
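One way to picture that DBMS-specific first step is a small SQL renderer. This is a hedged sketch, not necessarily what SQLFlow emits; `CREATE TABLE ... AS SELECT` is valid in both MySQL and Hive, but real codegen may add lifecycle or storage-format clauses.

```python
def temp_table_sql(dbms: str, temp_table: str, select_stmt: str) -> str:
    """Render the 'select data into a temp table' step for one DBMS (sketch)."""
    if dbms in ("mysql", "hive"):
        # CTAS works on both engines in the simple case.
        return f"CREATE TABLE {temp_table} AS {select_stmt}"
    raise NotImplementedError(f"temp-table step not sketched for {dbms}")

print(temp_table_sql("mysql", "tmp_iris", "SELECT * FROM iris.train"))
```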

> * Should we differentiate the step image by platform?
> * Should model / function images from the model zoo be platform-agnostic?

For these two questions above, the discussion...

> I used https://logomaker.com to create some prototype logos. Please vote and I will purchase the chosen one.
>
> 1. ![03AC9F02-BBBB-4685-B209-F0E3BB65C15B](https://user-images.githubusercontent.com/1548775/85190678-71e26d80-b26f-11ea-9bd1-32df22afdc79.png)

SQLFlow is a kind of flow. It is...

> 1. We can exclude Kubernetes from the framework, just generate a Python file to run Docker locally.

Do we need Minikube here?

> 1. We need to provide...
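A minimal sketch of that Kubernetes-free local path: the generated Python file builds a `docker run` command for each step. The image name is taken from the `TO RUN` example elsewhere in this thread; the flags and helper name are assumptions.

```python
def docker_run_argv(image, *cmd):
    """Build the `docker run` argv for one local step (illustrative)."""
    return ["docker", "run", "--rm", image, *cmd]

argv = docker_run_argv("sqlflow/runnable:v0.0.1", "python", "binning.py")
print(argv)
# When Docker is installed locally: subprocess.run(argv, check=True)
```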

Opened questions on the PyTorch Forum to track:

- [What’s the official high performance serving solution for PyTorch](https://discuss.pytorch.org/t/whats-the-official-high-performance-serving-solution-for-pytorch/91430)
- [How to keep consistency for data preprocessing between training and serving](https://discuss.pytorch.org/t/how-to-keep-consistency-for-data-preprocess-between-pytorch-model-training-and-serving/91434)

The serving solution using LibTorch is preferred. It's more PyTorch native.
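The LibTorch path usually starts by exporting the model to TorchScript from Python; the saved file is then loaded by the C++ server via `torch::jit::load`. A toy sketch (the model here is only a stand-in):

```python
import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 3)

    def forward(self, x):
        return self.linear(x)

# Script and save; the resulting file is loadable from C++ via LibTorch,
# which keeps the serving side "PyTorch native".
scripted = torch.jit.script(TinyModel())
scripted.save("tiny_model.pt")
```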