Write an application specification
This tutorial guides you through the steps to write an application specification. It reuses the specifications introduced in the more advanced deployment tutorial.
To run through the steps of this tutorial in a live Avassa environment, sign up for a free trial and we will set you up with a running system.
The reference section includes a number of application examples.
Prerequisites
This tutorial assumes you have supctl installed and that you have logged in to your Control Tower with supctl (see Create your first Avassa environment).
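If you have not yet logged in, a session typically looks something like this (a sketch; the API host and user below are placeholders for your own environment, and your exact supctl invocation may differ):
supctl --host https://api.my-org.avassa.net login my-user@my-org.com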
The Application
The application we are going to build is the full version of the theater room manager application that you may recognize from the more advanced tutorial. The final version will look like this:
name: theater-room-manager
version: "1.0"
services:
  - name: theater-operations
    variables:
      - name: OPERATIONS_USERNAME
        value-from-vault-secret:
          vault: operations
          secret: credentials
          key: username
    volumes:
      - name: credentials
        vault-secret:
          vault: operations
          secret: credentials
    containers:
      - name: projector-operations
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/projector-operations:v1.0"
      - name: digital-assets-manager
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/digital-assets-manager:v1.0"
        env:
          USERNAME: ${OPERATIONS_USERNAME}
        mounts:
          - volume-name: credentials
            mount-path: /credentials
    mode: replicated
    replicas: 1
    placement:
      preferred-anti-affinity:
        services: [ curtain-controller ]
  - name: curtain-controller
    containers:
      - name: curtain-controller
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/curtain-controller:v1.0"
    network:
      ingress-ip-per-instance:
        protocols:
          - name: tcp
            port-ranges: "80,443"
    mode: replicated
    replicas: 1
    placement:
      preferred-anti-affinity:
        services: [ theater-operations ]
Details
The application consists of three images:
registry.gitlab.com/avassa-public/movie-theaters-demo/projector-operations:v1.0
registry.gitlab.com/avassa-public/movie-theaters-demo/digital-assets-manager:v1.0
registry.gitlab.com/avassa-public/movie-theaters-demo/curtain-controller:v1.0
The projector operations and the digital assets manager have to run together, hence we put them in the same service.
The curtain controller should be run on a separate host if the movie theater has multiple hosts.
The digital assets manager needs to have access to some credentials: the username is read from an environment variable called USERNAME, and the password is read from a file called /credentials/password.
Further, the curtain controller needs to have an IP address allocated to make it accessible from the local network. The curtain controller listens on TCP ports 80 and 443.
Setting up the Vault
First we create a vault called operations. For simplicity, we also instruct the system to distribute the secrets in this vault to all sites. For more details, see here.
supctl create strongbox vaults <<EOF
name: operations
distribute:
  to: all
EOF
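You can verify that the vault was created by reading it back (a sketch, assuming supctl show mirrors the resource path used with create):
supctl show strongbox vaults operations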
Now we create a key/value map called credentials in this vault. We allow any image to access this key/value map.
supctl create strongbox vaults operations secrets <<EOF
name: credentials
allow-image-access:
  - "*"
data:
  username: my-username
  password: my-password
EOF
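Again, you can read the secret back to verify it (same assumption about the show path as above):
supctl show strongbox vaults operations secrets credentials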
Basic application structure
Open your favorite editor and start adding the following content:
name: theater-room-manager
version: "1.0"
services:
  - name: theater-operations
    containers:
      - name: projector-operations
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/projector-operations:v1.0"
      - name: digital-assets-manager
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/digital-assets-manager:v1.0"
    mode: replicated
    replicas: 1
  - name: curtain-controller
    containers:
      - name: curtain-controller
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/curtain-controller:v1.0"
    mode: replicated
    replicas: 1
This is the minimal scaffolding needed to get the structure correct. We have created two services, which allows us to specify affinity between them. mode: replicated and replicas: 1 tell the system that we want to run one instance of each service per site. At this point we are still missing credentials for the application.
Adding credentials
This assumes there exists a vault named operations with two key values in a key/value map called credentials:
username = some username
password = some password
First we add the username by mapping operations/credentials/username to an environment variable.
name: theater-room-manager
version: "1.0"
services:
  - name: theater-operations
    variables:
      - name: OPERATIONS_USERNAME
        value-from-vault-secret:
          vault: operations
          secret: credentials
          key: username
    containers:
      - name: projector-operations
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/projector-operations:v1.0"
      - name: digital-assets-manager
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/digital-assets-manager:v1.0"
        env:
          USERNAME: ${OPERATIONS_USERNAME}
    mode: replicated
    replicas: 1
  - name: curtain-controller
    containers:
      - name: curtain-controller
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/curtain-controller:v1.0"
    mode: replicated
    replicas: 1
First, a service-local variable called OPERATIONS_USERNAME is declared, with its value read from the vault secret. This variable is then used to set the environment variable USERNAME in the digital-assets-manager container.
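Inside the container, the application then reads the credential like any other environment variable; for example (illustrative only, not part of the demo images):
# Inside the digital-assets-manager container
echo $USERNAME    # prints the value of operations/credentials/username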
Next, let's mount operations/credentials/password as a file in the container's file system.
name: theater-room-manager
version: "1.0"
services:
  - name: theater-operations
    variables:
      - name: OPERATIONS_USERNAME
        value-from-vault-secret:
          vault: operations
          secret: credentials
          key: username
    volumes:
      - name: credentials
        vault-secret:
          vault: operations
          secret: credentials
    containers:
      - name: projector-operations
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/projector-operations:v1.0"
      - name: digital-assets-manager
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/digital-assets-manager:v1.0"
        env:
          USERNAME: ${OPERATIONS_USERNAME}
        mounts:
          - volume-name: credentials
            mount-path: /credentials
    mode: replicated
    replicas: 1
  - name: curtain-controller
    containers:
      - name: curtain-controller
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/curtain-controller:v1.0"
    mode: replicated
    replicas: 1
First, a volume named credentials is created as a reference to a vault key/value map. For the digital-assets-manager container, we mount this volume into the /credentials directory. As the key/value map contains a key called password, a file named password is created in the /credentials directory. (Since we also have a username key in the vault, the username will be stored as /credentials/username as well.)
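From inside the container, the password can then be read like any regular file (illustrative only, not part of the demo images):
# Inside the digital-assets-manager container
cat /credentials/password    # prints the value of operations/credentials/password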
Network
To give the curtain controller an IP address that is externally available, we instruct the system to allocate an ingress address:
name: theater-room-manager
version: "1.0"
services:
  - name: theater-operations
    variables:
      - name: OPERATIONS_USERNAME
        value-from-vault-secret:
          vault: operations
          secret: credentials
          key: username
    volumes:
      - name: credentials
        vault-secret:
          vault: operations
          secret: credentials
    containers:
      - name: projector-operations
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/projector-operations:v1.0"
      - name: digital-assets-manager
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/digital-assets-manager:v1.0"
        env:
          USERNAME: ${OPERATIONS_USERNAME}
        mounts:
          - volume-name: credentials
            mount-path: /credentials
    mode: replicated
    replicas: 1
  - name: curtain-controller
    containers:
      - name: curtain-controller
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/curtain-controller:v1.0"
    network:
      ingress-ip-per-instance:
        protocols:
          - name: tcp
            port-ranges: "80,443"
    mode: replicated
    replicas: 1
Here we have added an ingress to the curtain-controller service and whitelisted TCP ports 80 and 443.
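Once the application is deployed, each instance of the service gets its own IP address on the local network, and the curtain controller can be reached on the whitelisted ports, for example (the address is a placeholder for the allocated ingress IP):
curl http://<ingress-ip>/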
Affinity
Finally, we want to make sure that theater-operations and curtain-controller run on separate hosts (if possible) in each theater.
name: theater-room-manager
version: "1.0"
services:
  - name: theater-operations
    variables:
      - name: OPERATIONS_USERNAME
        value-from-vault-secret:
          vault: operations
          secret: credentials
          key: username
    volumes:
      - name: credentials
        vault-secret:
          vault: operations
          secret: credentials
    containers:
      - name: projector-operations
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/projector-operations:v1.0"
      - name: digital-assets-manager
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/digital-assets-manager:v1.0"
        env:
          USERNAME: ${OPERATIONS_USERNAME}
        mounts:
          - volume-name: credentials
            mount-path: /credentials
    mode: replicated
    replicas: 1
    placement:
      preferred-anti-affinity:
        services: [ curtain-controller ]
  - name: curtain-controller
    containers:
      - name: curtain-controller
        image: "registry.gitlab.com/avassa-public/movie-theaters-demo/curtain-controller:v1.0"
    network:
      ingress-ip-per-instance:
        protocols:
          - name: tcp
            port-ranges: "80,443"
    mode: replicated
    replicas: 1
    placement:
      preferred-anti-affinity:
        services: [ theater-operations ]
By adding the preferred-anti-affinity statements, we tell the system to try to schedule the two services on different hosts. Since the anti-affinity is preferred rather than required, both services will still be scheduled if a site only has a single host.
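Once deployed, you can check where the instances actually ended up, for example by listing the service instances on a site (a sketch; the exact supctl path for runtime state may differ in your version):
supctl show --site <site-name> applications theater-room-manager service-instances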
Registering the application in Control Tower
Assuming you named the YAML file application.yaml, you can now send it to the Control Tower:
supctl create applications < application.yaml
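If you later edit the file, you can update the stored specification in place (assuming supctl's replace verb, which takes the same resource path plus the application name):
supctl replace applications theater-room-manager < application.yaml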
Once the specification has been successfully loaded, we can check the state of the application:
supctl show applications theater-room-manager
The output will look something like this:
name: theater-room-manager
version: "1.0"
services:
  - name: theater-operations
    mode: replicated
    replicas: 1
    volumes:
      - name: credentials
        vault-secret:
          vault: operations
          secret: credentials
          file-mode: "400"
          file-ownership: 0:0
    share-pid-namespace: false
    variables:
      - name: OPERATIONS_USERNAME
        value-from-vault-secret:
          vault: operations
          secret: credentials
          key: username
    containers:
      - name: projector-operations
        image: registry.gitlab.com/avassa-public/movie-theaters-demo/projector-operations:v1.0
        image-status:
          status: present
          digest: sha256:166fb4808d6844b77dc6
        on-mounted-file-change:
          restart: true
      - name: digital-assets-manager
        image: registry.gitlab.com/avassa-public/movie-theaters-demo/digital-assets-manager:v1.0
        image-status:
          status: present
          digest: sha256:88cd634d04d2e9bf585f
        mounts:
          - volume-name: credentials
            mount-path: /credentials
        env:
          USERNAME: ${OPERATIONS_USERNAME}
        on-mounted-file-change:
          restart: true
  - name: curtain-controller
    mode: replicated
    replicas: 1
    share-pid-namespace: false
    containers:
      - name: curtain-controller
        image: registry.gitlab.com/avassa-public/movie-theaters-demo/curtain-controller:v1.0
        image-status:
          status: present
          digest: sha256:cd9f8edadf866a2013a2
        on-mounted-file-change:
          restart: true
    network:
      ingress-ip-per-instance:
        protocols:
          - name: tcp
            port-ranges: 80,443
modified-time: 2022-03-14T07:41:16Z
locally-deployed: false
Here we can see that the system has successfully pulled the images; look for image-status above (status: present indicates that the system has downloaded the image).
The next step is to deploy the application. This could, for example, be done as follows:
supctl create application-deployments <<EOF
name: theater-room-manager-deployment
application: theater-room-manager
placement:
  match-site-labels: >
    system/type = edge
EOF
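You can then follow the state of the deployment (same assumption about the show path as earlier):
supctl show application-deployments theater-room-manager-deployment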
Application versioning
In the examples above, the application had a version field, for example:
name: theater-room-manager
version: "1.0"
The version field is optional. If you do specify a version for the application, the version needs to be changed for any edits.
In the deployment you have two options for specifying the application version, as illustrated below:
- "*", which implies the latest application version
- an explicit version string
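As an illustration, a deployment that tracks the latest version versus one pinned to an explicit version could look like the following fragments (the application-version field name is an assumption here; consult the deployment reference for the exact schema):
# Track the latest application version
name: theater-room-manager-deployment
application: theater-room-manager
application-version: "*"
# Pin to an explicit application version
name: theater-room-manager-deployment
application: theater-room-manager
application-version: "1.0"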
Recommended workflows
For real operational scenarios we strongly recommend that you manage your application specifications and deployments in your code repository and let your CI/CD pipeline push them to the Control Tower. Working directly with the Control Tower UI is good when you develop, experiment, test, and learn the system. So what are the recommended versioning principles in the different phases of your project?
Early development and experimenting
Perhaps you are still learning the system, or are early in your development project, and want a quick iteration cycle for your code changes. In this phase an easy principle is to omit the version field from the application and use "*" as the version in the deployment. That way, any modification of the application takes immediate effect on the matched sites.
Operations
When you move on to operational deployments you should most likely use explicit versions in both your applications and your deployments. That way you can add new application versions to the Avassa system but deploy them at a later stage. It is also an option to use "*" in the deployment and explicit versions for your applications; this has the effect that any new application version is deployed automatically, if that is the desired behavior.
You can read more on application versioning in Application versioning.