How common Docker and Docker Compose options map to the Avassa application specification.

## Essentials
**Compose:**

```yaml
services:
  <name>:
    image: hello-world
```

**Avassa:** an image name without a host prefix can be used if the default registry is configured, but a fully prefixed image name is preferable:

```yaml
services:
  containers:
    image: registry-1.docker.io/library/hello-world
```
**Docker:** `docker run hello-world /hello`

**Compose:**

```yaml
services:
  <name>:
    command:
      - /hello
```

**Avassa:**

```yaml
services:
  containers:
    cmd:
      - /hello
```
**Docker:** `docker run --entrypoint /hello`

**Compose:**

```yaml
services:
  <name>:
    entrypoint:
      - /hello
```

**Avassa:**

```yaml
services:
  containers:
    entrypoint:
      - /hello
```
**Compose:**

```yaml
services:
  <name>:
    init: true
```

**Avassa:** on by default. To disable:

```yaml
services:
  containers:
    no-builtin-init: true
```
**Docker:** `docker run --env VAR1=value1`

**Compose:**

```yaml
services:
  <name>:
    environment:
      VAR1: value1
```

**Avassa:**

```yaml
services:
  containers:
    env:
      VAR1: value1
```
**Docker:** `docker run --user 1000:1000`

**Compose:**

```yaml
services:
  <name>:
    user: 1000:1000
```

**Avassa:** note that only numeric UID and GID values are allowed; usernames are not resolved from the image:

```yaml
services:
  containers:
    user: 1000:1000
```
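The Avassa fragments above only indicate where each field sits in the specification. As a rough sketch of how they fit together, a minimal complete application specification could look like the following, assuming a top-level application name and per-service and per-container name fields (all names are placeholders; mode and replicas are covered under Replicas and placement below):

```yaml
name: hello-world-app
services:
  - name: hello
    mode: replicated
    replicas: 1
    containers:
      - name: hello
        image: registry-1.docker.io/library/hello-world
        cmd:
          - /hello
        env:
          VAR1: value1
```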
## Network
**Docker:** the default behaviour is to allow all outbound connections.

**Avassa:** by default, all outbound connections from a container are forbidden. To allow all outbound connections:

```yaml
services:
  network:
    outbound-access:
      allow-all: true
```
**Docker:** `docker run --network host`

**Compose:**

```yaml
services:
  <name>:
    network_mode: host
```

**Avassa:**

```yaml
services:
  network:
    host: true
```
**Docker:** `docker run --network container:other --name this`

**Compose:**

```yaml
services:
  this:
    network_mode: service:other
```

**Avassa:** containers running in the same network namespace should be members of the same service:

```yaml
services:
  containers:
    - name: this
    - name: other
```
**Docker:** `docker run --publish 80:80/tcp`

**Compose:**

```yaml
services:
  <name>:
    ports:
      - "80:80/tcp"
```

**Avassa:**

```yaml
services:
  network:
    ingress-ip-per-instance:
      protocols:
        - name: tcp
          port-ranges: "80"
```
## Volumes and mounts
**Docker:** `docker run --volume data:/data` or `docker run --mount type=volume,source=data,target=/data`

**Compose:**

```yaml
services:
  <name>:
    volumes:
      - type: volume
        source: data
        target: /data
```

**Avassa:** there are two different types of volumes, persistent-volume and ephemeral-volume, with different lifecycles. Learn more in the Application persistent storage document.

```yaml
services:
  volumes:
    - name: data
      persistent-volume:
        size: 10 GiB
  containers:
    mounts:
      - volume-name: data
        mount-path: /data
```
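For scratch data that does not need to outlive the application, an ephemeral-volume can be declared the same way. A minimal sketch, assuming ephemeral-volume takes a size just as persistent-volume does (the volume name and mount path are placeholders):

```yaml
services:
  volumes:
    - name: scratch
      ephemeral-volume:
        size: 1 GiB
  containers:
    mounts:
      - volume-name: scratch
        mount-path: /scratch
```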
**Docker:** `docker run --mount type=bind,source=/var/run/dbus/system_bus_socket,target=/var/run/dbus/system_bus_socket`

**Compose:**

```yaml
services:
  <name>:
    volumes:
      - type: bind
        source: /var/run/dbus/system_bus_socket
        target: /var/run/dbus/system_bus_socket
```

**Avassa:**

```yaml
services:
  volumes:
    - name: system_dbus_socket
      system-volume:
        reference: system_dbus
  containers:
    mounts:
      - volume-name: system_dbus_socket
        mount-path: /var/run/dbus/system_bus_socket
```

The system volume must be defined by the site provider, either in the global system settings or in the site settings:

```yaml
system-volumes:
  - name: system_dbus
    path: /var/run/dbus/system_bus_socket
```
## Devices and GPU
**Docker:** `docker run --device /dev/sdc`

**Compose:**

```yaml
services:
  <name>:
    devices:
      - /dev/sdc:/dev/sdc
```

**Avassa:**

```yaml
services:
  containers:
    devices:
      device-labels:
        - sdc
```

The device label must be defined by the site provider, either in the global system settings or in the site settings:

```yaml
device-labels:
  - label: sdc
    udev-patterns:
      - KERNEL=="sdc"
```
**Docker:** `docker run --gpus '"device=0"'` or `docker run --device nvidia.com/gpu=0`

**Compose:**

```yaml
services:
  <name>:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

or

```yaml
services:
  <name>:
    devices:
      - nvidia.com/gpu=0
```

**Avassa:**

```yaml
services:
  gpu:
    labels:
      - nvidia
    number-gpus: 1
```

The GPU label must be defined by the site provider, either in the global system settings or in the site settings:

```yaml
gpu-labels:
  - label: nvidia
    gpu-patterns:
      - vendor == "NVIDIA"
```
**Docker:** a common way to pass Intel GPUs into the container is `docker run --device /dev/dri`

**Compose:**

```yaml
services:
  <name>:
    devices:
      - /dev/dri:/dev/dri
```

**Avassa:**

```yaml
services:
  gpu:
    labels:
      - intel
```

The GPU label must be defined by the site provider, either in the global system settings or in the site settings:

```yaml
gpu-labels:
  - label: intel
    gpu-patterns:
      - vendor == "Intel"
```
## Resources
**Compose:**

```yaml
services:
  <name>:
    cpus: 1.5
```

or

```yaml
services:
  <name>:
    deploy:
      resources:
        limits:
          cpus: "1.5"
```

**Avassa:**

```yaml
services:
  containers:
    cpus: 1.5
```
**Docker:** `docker run --cpu-shares 1024`

**Compose:**

```yaml
services:
  <name>:
    cpu_shares: 1024
```

**Avassa:**

```yaml
services:
  containers:
    cpu-shares: 1024
```
**Compose:**

```yaml
services:
  <name>:
    mem_limit: 2GB
```

or

```yaml
services:
  <name>:
    deploy:
      resources:
        limits:
          memory: 2GB
```

**Avassa:**

```yaml
services:
  containers:
    memory: 2 GiB
```
**Compose:**

```yaml
services:
  <name>:
    read_only: true
```

**Avassa:**

```yaml
services:
  containers:
    container-layer-size: 0
```
**Docker:** `docker run --storage-opt size=10G`

**Compose:**

```yaml
services:
  <name>:
    storage_opt:
      size: 10G
```

**Avassa:** see the prerequisites for this setting.

```yaml
services:
  containers:
    container-layer-size: 10 GiB
```
## User and PID namespaces
**Compose:**

```yaml
services:
  <name>:
    userns_mode: host
```

**Avassa:**

```yaml
services:
  containers:
    user-namespace:
      host: true
```
**Docker:** `docker run --pid container:other --name this`

**Compose:**

```yaml
services:
  this:
    pid: service:other
```

**Avassa:**

```yaml
services:
  share-pid-namespace: true
  containers:
    - name: this
    - name: other
```
## Security features
**Docker:** `docker run --cap-add SYS_ADMIN`

**Compose:**

```yaml
services:
  <name>:
    cap_add:
      - SYS_ADMIN
```

**Avassa:**

```yaml
services:
  containers:
    additional-capabilities:
      - sys-admin
```
**Docker:** `docker run --security-opt apparmor=unconfined`

**Compose:**

```yaml
services:
  <name>:
    security_opt:
      - apparmor=unconfined
```

**Avassa:**

```yaml
services:
  containers:
    security:
      apparmor:
        disabled: true
```
**Docker:** `docker run --security-opt label=disable`

**Compose:**

```yaml
services:
  <name>:
    security_opt:
      - label=disable
```

**Avassa:**

```yaml
services:
  containers:
    security:
      selinux:
        disabled: true
```
**Compose:**

```yaml
services:
  <name>:
    privileged: true
```

**Avassa:** there is no support for privileged container mode in the Avassa system. A better approach from a security point of view is to identify the individual privileges the application lacks to perform its task and grant only those. To identify the privileges the application might need, consider the following:

- Does the container need additional capabilities, such as SYS_ADMIN or NET_ADMIN?
- Which devices does the container require for its operation?
- Are there any system paths masked by the container engine that the application needs to access?

If the answers are not immediately clear, one way to debug the application is to run it under the strace tool inside the container and look at the system calls that fail; these can hint at the resources the application lacks access to.
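As a rough sketch of this approach, the individual grants can then be expressed with the fields shown elsewhere on this page; the capability and device label below are placeholders for whatever the application actually turns out to need:

```yaml
services:
  containers:
    additional-capabilities:
      - sys-admin
    devices:
      device-labels:
        - sdc
```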
## Config and secrets
**Docker:** `docker service create --config source=cfg.yaml,target=/etc/cfg.yaml,mode=0440`

**Compose:**

```yaml
services:
  <name>:
    configs:
      - source: cfg.yaml
        target: /etc/cfg.yaml
        mode: 0440
```

**Avassa:**

```yaml
services:
  volumes:
    - name: cfg
      config-map:
        items:
          - name: cfg.yaml
            data: |
              hello: world
            file-mode: "440"
  containers:
    mounts:
      - volume-name: cfg
        files:
          - name: cfg.yaml
            mount-path: /etc/cfg.yaml
```
**Docker:** `docker service create --secret password`

**Compose:**

```yaml
services:
  <name>:
    secrets:
      - password
```

**Avassa:** in the Avassa system, secrets are stored in Strongbox vaults. To mount a vault secret:

```yaml
services:
  volumes:
    - name: secret
      vault-secret:
        vault: secret
        secret: userpass
        file-mode: "400"
  containers:
    mounts:
      - volume-name: secret
        files:
          - name: password
            mount-path: /run/secrets/password
```
## Replicas and placement
**Docker:** `docker service create --replicas 2`

**Compose:**

```yaml
services:
  <name>:
    scale: 2
```

or

```yaml
services:
  <name>:
    deploy:
      mode: replicated
      replicas: 2
```

**Avassa:**

```yaml
services:
  mode: replicated
  replicas: 2
```
**Docker:** `docker service create --mode global`

**Compose:**

```yaml
services:
  <name>:
    deploy:
      mode: global
```

**Avassa:**

```yaml
services:
  mode: one-per-matching-host
```
**Docker:** `docker service create --constraint node.labels.security==high`

**Compose:**

```yaml
services:
  <name>:
    deploy:
      placement:
        constraints:
          - node.labels.security==high
```

**Avassa:**

```yaml
services:
  placement:
    match-host-labels: security = high
```
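As a rough end-to-end illustration, the replica and placement settings above can be combined in a single service as in the sketch below; the application, service, and container names are placeholders, and the `security = high` label is assumed to be set on the target hosts by the site provider:

```yaml
name: constrained-app
services:
  - name: web
    mode: replicated
    replicas: 2
    placement:
      match-host-labels: security = high
    containers:
      - name: web
        image: registry-1.docker.io/library/hello-world
```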