A job process on an execute node runs under the submitter’s user ID, so it has the same permissions that user would have if they simply logged in.
It’s possible to set up slot-specific users so that each job runs under its own separate account, but that requires explicit input and output data transfers, since the job would
no longer have access to protected files in the submitting user’s home directory. It therefore calls for somewhat more thoughtful job design.
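A hedged sketch of what that slot-user setup might look like in the execute node’s configuration. The account names here (slot1user, slot2user) are assumptions — you would create matching local accounts on the execute node first:

```
# Hypothetical example: run each slot's jobs as a dedicated local account
# instead of the submitting user. Account names are placeholders.
SLOT1_USER = slot1user
SLOT2_USER = slot2user

# Tell the starter these accounts are dedicated to HTCondor, so it can
# safely track and clean up any processes a job leaves behind.
DEDICATED_EXECUTE_ACCOUNT_REGEXP = slot[0-9]+user

# Don't fall back to running jobs as the submitting owner.
STARTER_ALLOW_RUNAS_OWNER = false
```

With this in place, a job can only read what those dedicated accounts can read, which is why the input/output files have to be transferred rather than read in place.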
You could also run all jobs inside a Singularity container, for example, and restrict which directories may be bound into that container.
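A rough sketch of the container approach, assuming a site-provided default image (the image path below is an assumption, not a recommendation):

```
# Hypothetical example: force every job into a Singularity container
# and limit which host directories are bound inside it.
SINGULARITY_JOB = true
SINGULARITY_IMAGE_EXPR = "/cvmfs/images/default.sif"

# Only these host paths become visible inside the container;
# everything else on the local filesystem is hidden from the job.
SINGULARITY_BIND_EXPR = "/scratch"
```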
At the very least, mounting /tmp and /var/tmp under scratch gives each job its own private /tmp and /var/tmp directories.
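That one is a single knob in the execute node’s configuration:

```
# Back each job's /tmp and /var/tmp with its own scratch directory,
# so jobs can't see each other's temp files.
MOUNT_UNDER_SCRATCH = /tmp,/var/tmp
```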
Using SELinux can also help, since it can block access even when the UNIX permissions are misconfigured.
What is the underlying concern? It’s quite hard to inadvertently wipe out someone else’s work on a properly configured Linux system.
Michael V Pelletier
Digital Transformation & Innovation
I am working on expanding our existing workstation condor pool and have gotten some concerned questions about what directories a submitted job can access in the local file system beyond <$(LOCAL_DIR)/execute>
and any temp directories set up by HTCondor daemons. Reading through the security section of the docs, the project’s effort focuses on preventing unauthorized access to the pool, presuming that users granted access will not abuse their resource permissions.