- This works really well, as is, when you just push it as an oras artifact! I made an oras-csi plugin that allowed me to easily use oras to pull and then mount across nodes (example recipe: https://github.com/converged-computing/oras-csi/blob/main/examples/basic/pod/pod.yaml)
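For anyone trying this, the push/pull round trip looks roughly like the following sketch (the registry path is a placeholder, and `oras://` support in your singularity build is assumed):

```shell
# Hedged sketch: round-tripping a SIF through an OCI registry as an ORAS artifact.
SIF=lolcow.sif
REF="oras://ghcr.io/example/lolcow:latest"
if command -v singularity >/dev/null 2>&1; then
  singularity push "$SIF" "$REF" || echo "push failed (check registry auth)"
  singularity pull "$SIF" "$REF" || echo "pull failed"
else
  echo "singularity not installed; commands shown for reference"
fi
```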
- @dtrudg has anyone asked about cross-arch builds? It could be something as simple as using QEMU like the Docker build GitHub action does, maybe?
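For context, a sketch of that QEMU approach as used by the buildx tooling (the platform, image tag, and build context here are illustrative):

```shell
# Hedged sketch: cross-arch builds via QEMU binfmt registration, as buildx does.
PLATFORM=linux/arm64
if command -v docker >/dev/null 2>&1; then
  # Register QEMU binfmt handlers so foreign-arch binaries can execute:
  docker run --privileged --rm tonistiigi/binfmt --install arm64 || echo "binfmt setup failed"
  # Then build for the foreign architecture:
  docker buildx build --platform "$PLATFORM" -t example:arm64 . || echo "build failed"
else
  echo "docker not installed; commands shown for reference"
fi
```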
SingularityCE 4.1 is targeted for release in the November-January 2023 period.
Please use this thread to comment and suggest features for inclusion in 4.1. We'll update this top post as items are suggested and discussed.
The focus of SingularityCE 4.1 will be bugfix, performance, and functionality improvements related to the OCI mode.
4.1 Planned Features
Multi-layer OCI-SIF
Multi-layered OCI-SIF images #2219
Pull from OCI -> OCI-SIF currently squashes to a single layer, as well as converting to squashfs. We should be able to preserve layers, and assemble the rootfs from multiple layers at runtime. This would allow images to be moved into and out of OCI-SIF in a less lossy manner.
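For context, the current single-layer behaviour can be observed by pulling an image in OCI mode and listing the SIF descriptors (the image reference is illustrative):

```shell
# Hedged sketch: inspect the layer layout of an OCI-SIF pull.
REF=docker://alpine:latest
OUT=alpine.oci.sif
if command -v singularity >/dev/null 2>&1; then
  singularity pull --oci "$OUT" "$REF" || echo "pull failed"
  # List SIF descriptors; currently this shows a single squashed layer:
  singularity sif list "$OUT" || echo "sif list failed"
else
  echo "singularity not installed; commands shown for reference"
fi
```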
SCIF for OCI-mode
Support `--app` in `--oci` mode (SCIF in OCI) #1470
SCIF is not supported in the OCI-mode. Support can be added to run OCI containers that have been created with a SCIF apps layout.
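For reference, the native runtime builds SCIF apps from `%app*` sections of a definition file, along these lines (package and app names are illustrative):

```
Bootstrap: docker
From: ubuntu:22.04

%appinstall cowsay
    apt-get update && apt-get install -y cowsay

%apprun cowsay
    exec cowsay "$@"
```

An app is then invoked with `singularity run --app cowsay image.sif hello`; it is this `--app` selection that has no `--oci` equivalent yet.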
Alternate `--authfile` for OCI registry interaction
--authfile flag for OCI credentials #2098
Allow an alternative file holding OCI registry credentials to be used for auth to OCI registries.
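Such a file would presumably follow the existing containers-auth / Docker `config.json` layout, e.g. (the credential below is just the base64 of the dummy string `user:password`, and the registry name is a placeholder):

```
{
  "auths": {
    "registry.example.com": {
      "auth": "dXNlcjpwYXNzd29yZA=="
    }
  }
}
```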
Further support FUSE mounts in native mode
Use squashfuse in native mode when 'allow kernel squashfs = no' #2216
Use fuse2fs in native mode when 'allow kernel extfs = no' #2217
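As background, `squashfuse` (and similarly `fuse2fs` for extfs) mounts the filesystem entirely in user space, so no kernel filesystem module is involved. A minimal sketch with placeholder paths:

```shell
# Hedged sketch: user-space squashfs mount with squashfuse.
IMG=rootfs.squashfs
MNT=/tmp/sqfs-mnt
if command -v squashfuse >/dev/null 2>&1 && [ -f "$IMG" ]; then
  mkdir -p "$MNT"
  squashfuse "$IMG" "$MNT"   # FUSE mount: no kernel squashfs needed
  ls "$MNT"                  # rootfs contents are now visible
  fusermount -u "$MNT"       # unmount when done
else
  echo "squashfuse or $IMG not available; commands shown for reference"
fi
```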
We may wish to investigate further use of FUSE mounts, instead of kernel mounts, in native mode. The `--sif-fuse` option, which mounts a SIF squashfs rootfs outside of namespaces etc., is a starting point, but it is not sufficient for mounting ext/squashfs overlay partitions. Scope is limited to extfs / squashfs, not the overlayfs mount. This is not limited to running within an unprivileged user namespace; older systems such as SLES12 don't support FUSE in an unprivileged userns.
Build from Dockerfile to OCI-SIF
Build from Dockerfile to OCI-SIF (build --oci) #2218
The new OCI-mode supports pulling OCI containers to an OCI-SIF, and running them. It is not currently possible to build a container from a Dockerfile or a definition file into an OCI-SIF. Given that OCI-mode concentrates on OCI compatibility, where a Dockerfile is used to build containers, it would make sense for SingularityCE to support build --oci from a Dockerfile into an OCI-SIF.
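A hedged sketch of what the invocation proposed in #2218 might look like; `build --oci` is not yet implemented, so this is illustrative only (file names are placeholders):

```shell
# NOTE: 'build --oci' is the *proposed* syntax from #2218, not current behaviour.
DOCKERFILE=Dockerfile
OUT=myapp.oci.sif
if command -v singularity >/dev/null 2>&1 && [ -f "$DOCKERFILE" ]; then
  singularity build --oci "$OUT" "$DOCKERFILE" || echo "build --oci not available in this version"
else
  echo "singularity or $DOCKERFILE not available; commands shown for reference"
fi
```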
4.1 Under Consideration
SIF embedded overlays for OCI-mode
OCI-mode does not support using an overlay that is an extfs/squashfs file inside a SIF. This should be addressed. There are questions over how to represent the overlay. Is it an annotated layer?
Instances for OCI-mode
OCI-mode doesn't support background containers, which are known as instances in the native runtime. We may wish to either:
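For reference, this is the native runtime's instance workflow, which currently has no `--oci` equivalent (image and instance names are placeholders):

```shell
# Hedged sketch: background containers via the native runtime's instances.
SIF=myservice.sif
NAME=web1
if command -v singularity >/dev/null 2>&1 && [ -f "$SIF" ]; then
  singularity instance start "$SIF" "$NAME"   # run the container in the background
  singularity instance list                   # show running instances
  singularity instance stop "$NAME"           # tear it down
else
  echo "singularity or $SIF not available; commands shown for reference"
fi
```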
SR-IOV Networking Support
SR-IOV Networking Support #83
Common server Ethernet and IB cards support SR-IOV, where they can present multiple 'PCIe virtual functions' that act as independent network devices but share the same hardware. E.g. my Mellanox ConnectX3-PRO can be configured so that it presents as 16 network devices per port. This is often used to share a card between multiple VMs. Containers may also benefit from networking being shared at this layer, for general performance reasons and container-specific native IB support. See https://github.com/Mellanox/docker-sriov-plugin and subsequent CNI direction. CDI appears to be the approach to support handling network devices, going forward.
Better / emphasize ORAS support
@dtrudg note - haven't transferred this to an issue yet, as I think some of the HPC-Containers OCI discussion might help scope it a bit more first?
People have also expressed liking layers, but I (@vsoch) don't personally think this maps well to Singularity - the single SIF binary is really beneficial in many cases and part of the Singularity design. But in terms of registries, I think more work should be done to make it easy to push a Singularity container to, say, an OCI registry. E.g., if/when OCI can add additional content types via the image manifest or artifacts spec, this could be a possibility. It would be really nice to have an OCI registry that can hold Singularity containers, however we get there. I don't think Singularity (long term) can be competitive with new technologies like Podman if it's always implementing its own formats, etc.
Expose cgroups namespace as an option for native singularity runtimes
The cgroups namespace will be addressed for the OCI runtime in 3.10. Consider allowing it to be requested for the native singularity runtime.
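As background, a cgroup namespace makes the container's own cgroup appear as the root of the hierarchy, rather than exposing the host's cgroup path. The effect can be seen with util-linux `unshare` (a sketch; assumes a kernel with unprivileged user + cgroup namespace support):

```shell
# Hedged sketch: observing the effect of a cgroup namespace.
NSFLAGS="-U -r -C"   # -U user ns, -r map root, -C cgroup ns (util-linux unshare)
if command -v unshare >/dev/null 2>&1; then
  cat /proc/self/cgroup                       # host view: full cgroup path
  unshare $NSFLAGS cat /proc/self/cgroup || echo "cgroup namespaces unsupported here"
else
  echo "unshare not available; commands shown for reference"
fi
```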
Non-root / Default Security Profiles
Non-root / Default Security Profiles #74
SingularityCE can apply security restrictions, such as `selinux` rules and `seccomp` filters, via a `--security` flag. However, this only works for `root`. Since SingularityCE focuses on non-root execution, it would be useful for optional/mandatory profiles to be applied to container runs for non-root users. This would allow security restrictions beyond the usual POSIX permissions to be mandated for container execution. Consider:
Note - this is distinct from rootless cgroups v2 limits. The default profiles would be put in place by privileged code, in the same manner as 'blessed' CNI configurations.
This may be re-scoped to part of the OCI runtime integration for native / SIF encapsulated OCI containers. It is questionable how many people will make use of it with the native singularity runtime engine.
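For illustration, a Docker-style seccomp profile of the kind `--security seccomp:<file>` consumes might look like the fragment below (the syscall list is purely illustrative); the feature request is about letting an administrator mandate such a profile for non-root runs:

```
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["chmod", "chown", "fchown"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
```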
Mellanox IB/OFED Library Discovery & Binding
Mellanox IB/OFED Library Discovery & Binding #76
When running a multi-node application that uses Infiniband networking, the user is currently responsible for making sure that required libraries are present in the container, or bound in from the host. We should be able to discover the required libraries on the host, for automatic bind-in when the container distribution is compatible.
Not yet assigned to a release milestone.
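A rough sketch of the discovery side, using `ldconfig` to locate candidate verbs/Mellanox libraries on the host (library names and the commented bind path are examples, not a fixed interface):

```shell
# Hedged sketch: find host IB libraries that a container might need bound in.
LIB=libmlx5.so.1
if command -v ldconfig >/dev/null 2>&1; then
  ldconfig -p | grep -E 'libibverbs|libmlx' || echo "no IB libraries found on host"
else
  echo "ldconfig not available; commands shown for reference"
fi
# Today the user binds such libraries manually, e.g. (illustrative path):
# singularity exec --bind /usr/lib/x86_64-linux-gnu/libibverbs.so.1 image.sif app
```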