The following command will download the container image from Docker Hub…
singularity pull docker://alpine
Singularity will not only download the image, but also convert it into a single-file snapshot that Singularity can run directly.
ls -l alpine_latest.sif
The resulting file uses the Singularity Image Format (SIF).
singularity shell alpine_latest.sif
cat /etc/alpine-release
exit
Because the Docker container image is snapshotted into a single SIF file, the image can be used on multiple hosts at the same time. For that we just move the image onto a shared filesystem.
mkdir /fsx/singularity-images
mv alpine_latest.sif /fsx/singularity-images/
singularity shell /fsx/singularity-images/alpine_latest.sif
That’s how most pure HPC container runtimes operate: create a snapshot of some form and place it on a shared filesystem. The snapshot is mounted on the host, so the metadata lookups are served locally. When running at scale, a file lookup is answered by the local kernel instead of hammering the metadata server of the shared filesystem. With complex (and poorly structured) Python containers, you can easily stress the metadata server if you run them unpacked straight off a shared filesystem. :)
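To get a feel for the metadata pressure involved, here is a small illustrative sketch (plain shell; the file count and paths are made up for the demonstration). It creates 1000 empty files standing in for Python module candidates and stats each one once. On a shared filesystem every one of those stats would be a round-trip to the metadata server; inside a mounted SIF image the same lookups are answered by the local kernel out of a single file.

```shell
# Hypothetical illustration: emulate the flood of metadata lookups
# an interpreter produces while scanning for modules at startup.
dir=$(mktemp -d)

# Create 1000 empty files standing in for .py module candidates.
i=1
while [ "$i" -le 1000 ]; do
    : > "$dir/module_$i.py"
    i=$((i + 1))
done

# stat() each file once -- one metadata operation per file.
count=0
for f in "$dir"/*.py; do
    stat "$f" > /dev/null
    count=$((count + 1))
done

echo "$count metadata lookups for a single module scan"
rm -rf "$dir"
```

On a real host you could observe the same effect with something like `strace -c -e trace=openat,stat python3 -c 'import numpy'`, which tallies how many filesystem syscalls a single import triggers.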
As we’ll see in future additions to this workshop, Singularity is one option among several for running HPC containers. See also the section in a FOSDEM21 talk about Singularity as a runtime.