
Ability to define stages for custom docker files in the nitric yaml

Open davemooreuws opened this issue 3 years ago • 8 comments

Describe the solution you'd like

The ability to define multiple stages of my nitric application, such as dev and prod, with the ability to use a custom Dockerfile for each stage and runtime.

Example yaml:

name: debug-ts
handlers:
  - functions/*.ts

run:
  stage: dev
deploy:
  stage: prod

stages:
  - stage: dev
    docker:
      - runtime: ts
        path: dev.Dockerfile
  - stage: prod
    docker:
      - runtime: ts
        path: prod.Dockerfile

Why is this needed? What challenge will it help you solve? Please describe.

It is needed to customize the Docker image to meet a project's needs (for example, a PDF creator that uses Chromium). It could also be used for environment loading in the future.

davemooreuws avatar Mar 30 '22 05:03 davemooreuws

I think the "prod" stuff needs to be in the stack yaml (because the user will need to include the provider specific membrane). this bit https://github.com/nitrictech/cli/blob/develop/pkg/runtime/generate_test.go#L47-L48

nitric-prod.yaml

provider: aws
region: us-east-1
dockerfile: aws.Dockerfile

Also, can you explain why we need a custom Dockerfile for "nitric run"? Is it the collection stage or the run part?

asalkeld avatar Mar 30 '22 05:03 asalkeld

It is the run part; collection is fine as it is. The reason is so the end user can build whatever they want into their image, for example installing Chromium for use with Puppeteer.
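As a rough illustration of the Puppeteer case, a custom dev-stage Dockerfile might look like the sketch below. The base image, paths, and env vars are assumptions for illustration only, not part of any actual nitric setup (and the Puppeteer env var names have varied between versions):

```dockerfile
# Hypothetical dev.Dockerfile — installs Chromium so Puppeteer can drive
# the system browser instead of downloading its own bundled copy.
FROM node:18-slim

RUN apt-get update \
    && apt-get install -y --no-install-recommends chromium \
    && rm -rf /var/lib/apt/lists/*

# Tell Puppeteer to skip its own Chromium download and use the system one.
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true \
    PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium

WORKDIR /app
COPY . .
RUN npm ci

CMD ["npm", "start"]
```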

The provider for prod sounds reasonable. Any reason why these configs aren't combined?

davemooreuws avatar Mar 30 '22 06:03 davemooreuws

Or maybe we add it as part of config as code, since this would add more config to the yaml.

davemooreuws avatar Mar 30 '22 06:03 davemooreuws

Yeah, since we don't currently have a per-function section in the config, that might be best. Allow something like

resources.WithBuildStage("custom", []string{"FROM builder AS custom", "RUN ..."})

asalkeld avatar Mar 30 '22 21:03 asalkeld

I think including this in the code could have some odd consequences down the road, where the only way to evaluate the build requirements is to run the code, but the only way to run the code is to build it (you could end up in a nasty cycle here).

I think having a definition of runtimes and then being able to map those runtimes to subsets of functions would probably work better, rather than only being able to define a single runtime for all functions within a stack.

In nitric.yaml

runtimes:
  database:
    # base may not be necessary here
    base: ./docker/database/base.dockerfile
    # dev is used for hot-reloading
    dev: ./docker/database/dev.dockerfile
    # prod is used for final deployable artifacts
    prod: ./docker/database/prod.dockerfile
handlers:
  # Using default runtime
  - ./functions/base/*.ts
  # Using a custom runtime
  - runtime: database
    handlers: ./functions/database/*.ts

This would prevent users from being locked into a single runtime definition for a project.

I've been thinking more about techniques we could use to perform membrane wrapping, without having to force the user to do anything special with their docker containers.

If we inspect their built image first, we might be able to break down the provided ENTRYPOINT and CMD and attempt to wrap the membrane around it in another build stage:

FROM <users built image>

ADD <membrane>

ENTRYPOINT <membrane-path>
# Fold original command and entrypoint together
CMD <original image entrypoint + cmd>

Not sure if this is necessarily a good idea, as it does obscure the implementation from the user.
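The "fold original command and entrypoint together" step can be sketched as a small pure function. This is an illustrative sketch only (the function name is hypothetical, and in practice the arrays would come from inspecting the built image's config), relying on the fact that Docker forms a container's argv by concatenating ENTRYPOINT and CMD:

```typescript
// Hypothetical helper: fold an image's original ENTRYPOINT and CMD into a
// single CMD, freeing up ENTRYPOINT for the membrane binary.
// Either value may be absent (null) in the image config.
function foldEntrypointAndCmd(
  entrypoint: string[] | null,
  cmd: string[] | null,
): string[] {
  // Docker concatenates ENTRYPOINT + CMD to build the container's argv,
  // so the folded CMD is simply the two arrays joined in order.
  return [...(entrypoint ?? []), ...(cmd ?? [])];
}
```

For example, an image with `ENTRYPOINT ["node"]` and `CMD ["server.js"]` would fold to `["node", "server.js"]`, which becomes the new CMD while the membrane takes over ENTRYPOINT.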

These ideas are very rough though; this issue needs a fair amount more consideration.

tjholm avatar Mar 31 '22 00:03 tjholm

How about no config, and it's just by convention:

dev-<functionName>.dockerfile
prod-<functionName>.dockerfile

IMHO, the less config the better :-)

asalkeld avatar Mar 31 '22 00:03 asalkeld

> how about no config, and it's just by convention
>
> dev-<functionName>.dockerfile
> prod-<functionName>.dockerfile
>
> IMHO, the less config the better :-)

The only problem with this is that there would be no sharing of Dockerfiles between functions (unless by functionName you mean runtimeName), so users would be left copying and pasting new Dockerfiles, which could lead to unintentional drift between them.

tjholm avatar Mar 31 '22 01:03 tjholm

I was thinking another approach would be defining named 'runtimes', for lack of a better term, then allowing functions to opt into those runtimes. We could also provide a default value by convention.

Stack files can define the specific implementation of a runtime profile for that stack:

nitric-dev.yaml

runtimes:
  default:
    memory: 128
  fast:
    dockerfile: ./docker/fast/Dockerfile
    memory: 1024

nitric-prod.yaml

runtimes:
  default:
    memory: 512
  fast:
    dockerfile: ./docker/fast/Dockerfile
    memory: 2048

Functions either get the default or request a specific profile:

default - no config:

import { api } from "@nitric/sdk";

const main = api('main');

main.get("/foo", async (ctx) => {
 // ....
});

request profile:

import { api, config } from "@nitric/sdk";

const main = api('main');

config({
  runtime: 'fast',
});

main.get("/foo", async (ctx) => {
 // ....
});

The benefits I see to this are that the definition of what the function needs remains with the function. Runtimes are reusable. As an alternative to globs in a config file, this approach remains simple even with a variety of profiles in a single src folder. One challenge would be handling missing profiles in a stack file.
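The missing-profile challenge could be handled with a fallback to the stack's default runtime. The sketch below is a hypothetical illustration of that resolution step (the type and function names are assumptions, not the actual nitric implementation):

```typescript
// Hypothetical shape of a runtime profile entry in a stack file.
interface RuntimeProfile {
  dockerfile?: string;
  memory: number;
}

// Resolve a function's requested runtime profile against the stack file's
// `runtimes` map, falling back to `default` when the requested profile
// isn't defined for this stack.
function resolveProfile(
  runtimes: Record<string, RuntimeProfile>,
  requested?: string,
): RuntimeProfile {
  if (requested && runtimes[requested]) {
    return runtimes[requested];
  }
  // A missing profile falls back to the stack's default rather than failing,
  // so stacks only need to define the profiles they actually customize.
  return runtimes["default"];
}
```

Under this scheme, a function requesting `fast` in a stack that only defines `default` would still deploy, just with the default profile's settings; whether silent fallback or a hard error is the right behavior is itself part of the open question.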

jyecusch avatar May 04 '22 03:05 jyecusch