
[HTCondor-users] Protecting jobs from one another (was Re: condor_ssh_to_job)



On 8/22/2013 6:36 PM, Rich Pieri wrote:
> Dimitri Maziuk wrote:
>> The other is per-slot users. I'm not sure I buy the "trample over other
>> nobody jobs' files" argument: if you sandbox each job properly in its
>> own per-pid (chroot'ed?) filespace, that should take some serious
>> effort.
>
> It takes almost no effort. All a malicious user needs to do is submit a
> job that runs on the same node as the victim's job. chroot jails do not
> protect a process's address space or the process itself. If a process is
> running as UID nobody, then any other process running as UID nobody can
> peruse and scribble on the first process's allocated memory. Other
> processes running as UID nobody can issue signals to the first process,
> causing it to crash or dump core or what have you.


Just for the record, note that HTCondor v8.0.x supports both chrooting jobs (although setting up chroot jails can be a chore due to shared libraries, etc.) and placing jobs in their own pid namespace if you set
  USE_PID_NAMESPACES = True
By telling HTCondor to place each job into a separate pid namespace, you address several of the concerns mentioned above: a job cannot attach to, send signals to, or view the memory of another job, even if the other job is running as the same UID.
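
As a minimal sketch of what that might look like in an execute node's condor_config (only USE_PID_NAMESPACES is needed for the pid-namespace isolation above; the MOUNT_UNDER_SCRATCH and NAMED_CHROOT lines are illustrative assumptions about one possible setup, not required):

  # Give each job its own pid namespace, so it cannot see, signal, or
  # inspect processes belonging to other jobs, even under the same UID.
  USE_PID_NAMESPACES = True

  # Assumed extras: give each job a private /tmp and /var/tmp by
  # bind-mounting them under the job's scratch directory...
  MOUNT_UNDER_SCRATCH = /tmp,/var/tmp

  # ...and advertise a pre-built chroot that jobs may request.
  NAMED_CHROOT = /chroots/sl6

New jobs should pick these settings up after a condor_reconfig on the execute node.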

For info see

http://research.cs.wisc.edu/htcondor/manual/v8.0/3_12Setting_Up.html#SECTION0041210000000000000000

Over the past couple of years HTCondor has added a lot of mechanisms to isolate jobs from one another (e.g. sandboxing CPU, memory, /tmp, visible pids, the filesystem, ...). At HTCondor Week three months ago there was a tutorial that gave an overview of all of them; the sketch after the link below shows the general idea. The slides may be interesting to folks; see

http://research.cs.wisc.edu/htcondor/HTCondorWeek2013/presentations/ThainG_BoxingUsers.pdf
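
To make that concrete (a hedged sketch only, not taken from the slides; the cgroup knobs below assume a Linux execute node with cgroup support enabled): the job side declares what it needs in its submit file,

  # job.sub -- request an enforceable share of the machine
  executable     = my_analysis
  request_cpus   = 1
  request_memory = 2048
  queue

while the execute node's configuration decides how strictly those requests are enforced, for example via Linux cgroups:

  # condor_config on the execute node (assumed settings)
  BASE_CGROUP                = htcondor
  CGROUP_MEMORY_LIMIT_POLICY = hard

With something like this in place, a job that tries to exceed its requested memory is confined by its cgroup rather than being allowed to crowd out other jobs on the same machine.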

regards,
Todd