[one-cmd] Discussion on designing one-init like the `dotnet` CLI style
According to @lemmaa, the point of this discussion is to support templates at a defined unit (e.g. backend, model, ...) via one-init.
https://github.com/Samsung/ONE/pull/9259/files#r897528685
@YongseopKim, how about referencing the dotnet CLI style?
We could use the backend type as the template name, for example `onecc init onert` or `onecc init tv2`. Depending on the template, each step could take a default value for whether it is enabled or disabled.
```
sjlee@jupiter:~ $ dotnet new --list
These templates matched your input:

Template Name                                 Short Name      Language    Tags
--------------------------------------------  --------------  ----------  --------------------------
ASP.NET Core Empty                            web             [C#],F#     Web/Empty
ASP.NET Core gRPC Service                     grpc            [C#]        Web/gRPC
ASP.NET Core Web API                          webapi          [C#],F#     Web/WebAPI
ASP.NET Core Web App                          webapp,razor    [C#]        Web/MVC/Razor Pages
ASP.NET Core Web App (Model-View-Controller)  mvc             [C#],F#     Web/MVC
ASP.NET Core with Angular                     angular         [C#]        Web/MVC/SPA
ASP.NET Core with React.js                    react           [C#]        Web/MVC/SPA
Blazor Server App                             blazorserver    [C#]        Web/Blazor
Blazor WebAssembly App                        blazorwasm      [C#]        Web/Blazor/WebAssembly/PWA
Class Library                                 classlib        [C#],F#,VB  Common/Library
Console App                                   console         [C#],F#,VB  Common/Console
dotnet gitignore file                         gitignore                   Config
Dotnet local tool manifest file               tool-manifest               Config
EditorConfig file                             editorconfig                Config
global.json file                              globaljson                  Config
MSTest Test Project                           mstest          [C#],F#,VB  Test/MSTest
MVC ViewImports                               viewimports     [C#]        Web/ASP.NET
MVC ViewStart                                 viewstart       [C#]        Web/ASP.NET
NuGet Config                                  nugetconfig                 Config
NUnit 3 Test Item                             nunit-test      [C#],F#,VB  Test/NUnit
NUnit 3 Test Project                          nunit           [C#],F#,VB  Test/NUnit
Protocol Buffer File                          proto                       Web/gRPC
Razor Class Library                           razorclasslib   [C#]        Web/Razor/Library
Razor Component                               razorcomponent  [C#]        Web/ASP.NET
Razor Page                                    page            [C#]        Web/ASP.NET
Solution File                                 sln                         Solution
Web Config                                    webconfig                   Config
Worker Service                                worker          [C#],F#     Common/Worker/Web
xUnit Test Project                            xunit           [C#],F#,VB  Test/xUnit
```
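As a sketch of the idea, per-template step defaults could be kept in a plain mapping, so that picking a template name picks a set of enabled/disabled steps. The template names (`tv2`, `onert`) come from the discussion above; the step names follow the `[onecc]` section, but the structure and the particular default values here are purely a hypothetical illustration.

```python
# Hypothetical sketch: per-template defaults for enabling/disabling onecc steps.
# Template and step names follow the discussion; the chosen values are illustrative.
TEMPLATE_DEFAULTS = {
    "tv2": {
        "one-import-tflite": True,
        "one-optimize": True,
        "one-quantize": True,
        "one-codegen": True,
        "one-pack": False,
    },
    "onert": {
        "one-import-tflite": True,
        "one-optimize": True,
        "one-quantize": False,
        "one-codegen": False,
        "one-pack": True,
    },
}

def steps_for(template: str) -> dict:
    """Return the default step switches for a template name."""
    return TEMPLATE_DEFAULTS[template]
```

With such a table, `onecc init tv2` would only have to look up the defaults and emit a cfg; adding a new template is one new dictionary entry.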
https://github.com/Samsung/ONE/issues/9260#issuecomment-1155988924
I suggest providing multiple templates according to the input model type or the target backend type. That simplifies the problem, and the maintenance burden will be lower. See also https://github.com/Samsung/ONE/pull/9259#discussion_r897528685 .
I imagine:
```
$ onecc init -h
usage: one-init [-h] [-v] [--backend_template BACKEND_TEMPLATE] [--model_path MODEL_PATH]

Backend Template Candidates: (default), tv2, onert
```
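The imagined interface could be prototyped with `argparse`. The flags mirror the usage line above; the choices and the default value are assumptions, not a real one-init implementation:

```python
import argparse

# Hypothetical prototype of the imagined one-init command-line interface.
# Flags mirror the usage line above; choices and defaults are assumptions.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="one-init")
    parser.add_argument("-v", "--version", action="store_true",
                        help="show program version")
    parser.add_argument("--backend_template", default="default",
                        choices=["default", "tv2", "onert"],
                        help="backend template candidates: (default), tv2, onert")
    parser.add_argument("--model_path",
                        help="path to the input model file")
    return parser

# Example invocation: one-init --backend_template tv2 --model_path m.tflite
args = build_parser().parse_args(["--backend_template", "tv2",
                                  "--model_path", "m.tflite"])
```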
Then ONE would have three templates: default (onecc.template.cfg), tv2, and onert.
Below is an example for tv2; the template would be copied and filled in by one-init.
- Please see the `[one-optimize]` and `[one-codegen]` sections.
```
$ onecc init --type tflite --backend tv2 model_name.ext
$ cat model_name.cfg
; To activate a step (or task),
; set True for the step in [onecc] section and fill options in the corresponding section

[onecc]
; neural network model to circle
one-import-tf=False
one-import-tflite=True
one-import-bcq=False
one-import-onnx=False
; circle to circle with optimization
one-optimize=True
; circle to circle with quantization
one-quantize=True
; partition circle
one-partition=False
; package circle and metadata into nnpackage
one-pack=False
; generate code for backend
one-codegen=True
; profile
one-profile=False

[one-import-tflite]
# mandatory
; tflite file
input_path=model_name.ext
; circle file
output_path=model_name.circle

[one-optimize]
# mandatory
; circle file
input_path=model_name.circle
; circle file
output_path=model_name.opt.circle
# optimization options for backend
O=TVN

[one-quantize]
# mandatory
; circle file
input_path=model_name.opt.circle
; circle file
output_path=model_name.q8.circle
# optional arguments for quantization
; input data file (if not given, random data will be used for calibration)
input_data=
; h5/hdf5(default), list/filelist, or dir/directory
input_data_format=
; dtype of quantized model (uint8(default), int16)
quantized_dtype=
; granularity of quantization (layer(default), channel)
granularity=
; dtype of model's input (uint8, int16, float32). Same with quantized_dtype by default.
input_type=
; dtype of model's output (uint8, int16, float32). Same with quantized_dtype by default.
output_type=

[one-codegen]
# mandatory
backend=tvn
; commands for tvn backend
; -o output_file [--DSP-quota 64k|...] [--ils sequential|...] input_file
command=-o model_name.tvn model_name.q8.circle
```
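A driver consuming such a cfg could decide which steps to run from the `[onecc]` switches. A minimal sketch with Python's `configparser` (the actual onecc driver logic is not shown in this thread, so this is only an illustration of the section's semantics):

```python
import configparser

# Sketch: read the [onecc] section of a generated cfg and list the enabled steps.
# configparser handles the ini-style sections; keys keep their lowercase names.
def enabled_steps(cfg_text: str) -> list:
    parser = configparser.ConfigParser()
    parser.read_string(cfg_text)
    return [step for step, value in parser["onecc"].items()
            if value.lower() == "true"]

# A trimmed-down version of the [onecc] section from the example above.
cfg = """\
[onecc]
one-import-tflite=True
one-optimize=True
one-quantize=True
one-pack=False
one-codegen=True
"""

steps = enabled_steps(cfg)
```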
tvn.template.cfg:

```
; To activate a step (or task),
; set True for the step in [onecc] section and fill options in the corresponding section

[onecc]
; neural network model to circle
one-import-tf=False
one-import-tflite=False
one-import-bcq=False
one-import-onnx=False
; circle to circle with optimization
one-optimize=True
; circle to circle with quantization
one-quantize=True
; partition circle
one-partition=False
; package circle and metadata into nnpackage
one-pack=False
; generate code for backend
one-codegen=True
; profile
one-profile=False

[one-import-tflite]
# mandatory
; tflite file
input_path={model_name}
; circle file
output_path={circle_model_name}

[one-optimize]
# mandatory
; circle file
input_path={circle_model_name}
; circle file
output_path={opt_circle_model_name}
# optimization options for backend
O={backend_name}

[one-quantize]
# mandatory
; circle file
input_path={opt_circle_model_name}
; circle file
output_path={quantize_model_name}
# optional arguments for quantization
; input data file (if not given, random data will be used for calibration)
input_data=
; h5/hdf5(default), list/filelist, or dir/directory
input_data_format=
; dtype of quantized model (uint8(default), int16)
quantized_dtype=
; granularity of quantization (layer(default), channel)
granularity=
; dtype of model's input (uint8, int16, float32). Same with quantized_dtype by default.
input_type=
; dtype of model's output (uint8, int16, float32). Same with quantized_dtype by default.
output_type=

[one-codegen]
# mandatory
backend={backend_name}
; commands for {backend_name} backend
; {backend_command_comment}
command={backend_command}
```
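Filling such a template could be as simple as string substitution over the `{...}` placeholders. A minimal sketch, assuming the placeholder names used above; one-init's actual mechanism is not specified in this thread, and the trimmed-down template here is only for illustration:

```python
# Minimal sketch: fill the {placeholder} fields of a backend template.
# Placeholder names follow tvn.template.cfg above; the mechanism is hypothetical.
TEMPLATE = """\
[one-import-tflite]
input_path={model_name}
output_path={circle_model_name}

[one-codegen]
backend={backend_name}
command={backend_command}
"""

def render(template: str, model_name: str, backend_name: str,
           backend_command: str) -> str:
    """Derive the intermediate file names from the model name and fill the template."""
    stem = model_name.rsplit(".", 1)[0]
    return template.format(
        model_name=model_name,
        circle_model_name=stem + ".circle",
        backend_name=backend_name,
        backend_command=backend_command,
    )

cfg = render(TEMPLATE, "model_name.ext", "tvn",
             "-o model_name.tvn model_name.q8.circle")
```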
Questions
- If `one-init` supports backend templates, does `one-init` no longer need to support model types?
  - IMHO, `one-init` can also support model types.
- If `one-init` supports backend templates, the templates would seem to be maintained in ONE; e.g. the triv2 backend template would be maintained in ONE. Is that okay?

According to @lemmaa, the point of this discussion is to support templates backend-by-backend via `one-init`.
I did not limit this to the backend. It was just an example, and just a suggestion to provide templates in various forms like dotnet. What I wanted to achieve through it was:
- More intuitive use from the user's point of view
- More convenient maintenance from the developer's point of view

Classification by backend may be one way, and classification by input model type may be another. However, it is also undesirable to have too many combinations based on too many criteria.
Let's gather ideas. :)