Support for NVIDIA Jetson
Support for NVIDIA Tegra GPU
The Avassa solution supports the built-in GPU on NVIDIA Jetson boards. Provided the prerequisites below are fulfilled, the GPU will be discovered and can be used as described in the GPU passthrough tutorial.
Linux distro support
The recommended Linux distro for running Edge Enforcer on NVIDIA Jetson is Ubuntu.
Ubuntu
The Ubuntu distribution designed to run on the NVIDIA Jetson Orin platform can be downloaded from https://ubuntu.com/download/nvidia-jetson. Follow the instructions in the "Get started" section to install the OS onto the board. Note the following:
- the NVIDIA Tegra firmware should be installed as described in the instructions in order to enable support for the built-in GPU
- the NVIDIA Container Toolkit should be installed to pass the integrated GPU through into the containers; it is also a prerequisite for the GPU to be discovered by the Edge Enforcer
- the CUDA and other libraries do not need to be installed on the host system, as they are typically bundled into the container applications managed by the Edge Enforcer
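On Ubuntu, the Container Toolkit is installed from NVIDIA's apt repository. A sketch of the usual steps, following NVIDIA's installation guide at the time of writing (verify the repository URL and commands against the current instructions before running):

```shell
# Add NVIDIA's apt repository key and source list (per NVIDIA's install guide)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit itself
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```

Afterwards, running nvidia-ctk --version is a quick way to confirm the toolkit is on the path.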
Jetson Linux
NVIDIA Jetson Linux lacks kernel modules required to run the Edge Enforcer, most importantly those needed to set up the virtual network for the containers. This is also flagged by the pre-flight check at Edge Enforcer installation time.
To use Jetson Linux, a custom kernel needs to be built as described in the NVIDIA Jetson Linux Developer Guide. When building a custom kernel, make sure to include all modules listed in the Kernel Configuration Options section of the Host requirements document.
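Whether a given kernel provides a particular module can be spot-checked with a modprobe dry run. The module names below are illustrative examples only; the authoritative list is in the Kernel Configuration Options section of the Host requirements document:

```shell
# Dry-run modprobe (-n) to check whether each module can be resolved on this kernel.
# Module list is illustrative; see the Host requirements document for the full set.
status=0
for mod in veth bridge br_netfilter ip_tables; do
  if modprobe -n "$mod" 2>/dev/null; then
    echo "$mod: available"
  else
    echo "$mod: MISSING"
    status=1
  fi
done
```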
Other distros
Other Linux distros compatible with NVIDIA Jetson may be supported if they satisfy the requirements outlined in the Host requirements document.
Verification
To verify the installation, run a sample application inside the l4t-jetpack container.
First, create a label for the NVIDIA GPU by adding the following configuration snippet either to the system settings or to a specific site.
supctl merge system sites jetson <<EOF
gpu-labels:
- label: tegra
gpu-patterns:
- id == "Tegra-*"
EOF
Verify that the label is applied to the integrated GPU:
supctl show -s jetson system cluster hosts --fields hostname,gpus
- hostname: jetson
gpus:
- id: Tegra-a1a1af71-11bc-4327-afcb-855d8ccf1ba6
vendor: NVIDIA
name: NVIDIA Jetson Orin Nano Developer Kit
labels:
- tegra
Once this is done, the label can be referenced in application specifications by tenants that have access to it. Create the following application specification:
supctl create applications <<EOF
name: jetpack-test
version: 36.4.0.0
services:
- name: srv
mode: one-per-matching-host
volumes:
- name: test-script
config-map:
items:
- name: test.sh
data-verbatim: |
#!/bin/sh -e
rm -rf /tmp/cudnn_samples_v9
cp -r /usr/src/cudnn_samples_v9/ /tmp/
make -C /tmp/cudnn_samples_v9/conv_sample/
cd /tmp/cudnn_samples_v9/conv_sample/
./conv_sample
file-mode: "555"
containers:
- name: jetpack
mounts:
- volume-name: test-script
files:
- name: test.sh
mount-path: /tmp/test.sh
mode: read-only
image: nvcr.io/nvidia/l4t-jetpack:r36.4.0
cmd:
- sleep
- infinity
gpu:
labels:
- tegra
EOF
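Rolling the application out to sites is done with an application deployment. A minimal sketch, assuming the site is named jetson; the field names below are assumptions based on the general Avassa deployment schema, so check the deployment documentation for the exact format:

```shell
# Hypothetical deployment resource targeting the "jetson" site by its system/name label
supctl create application-deployments <<EOF
name: jetpack-test-deploy
application: jetpack-test
placement:
  match-site-labels: >
    system/name = jetson
EOF
```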
Deploy the application to the relevant sites. Once the application is running, execute the
/tmp/test.sh
script inside the container:
supctl do -s jetson applications jetpack-test service-instances srv-1 \
containers jetpack exec /tmp/test.sh
The script compiles and runs a cuDNN sample application and should eventually output "Test PASSED".
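If the test fails, a useful first check is whether the GPU is actually visible inside the container. Recent JetPack releases bundle nvidia-smi with integrated-GPU support (its presence in this particular image is an assumption), and it can be invoked through the same exec mechanism:

```shell
# Check GPU visibility from inside the running container
supctl do -s jetson applications jetpack-test service-instances srv-1 \
    containers jetpack exec nvidia-smi
```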