> I'm not sure I understood what exactly you want to achieve. Say, you
> have a web portal. Your users invoke an application via this portal and
> are not aware of Condor behind the curtain. Each invocation results in
> multiple jobs being submitted to Condor. All jobs are submitted under
> the same user (right?), and are run one after another. However, you
> want interleaving, i.e., if we mark all jobs of user1 as 1, user2 as 2,
> and so on, then the submission 11112222 should run as 12121212. Is
> that correct?

Exactly!
> If yes, I think there are several solutions. 1) You have this "runas"
> command, which allows you to run things under another user ID. So if
> you submit with "runas" it should work the way you want. The only
> problem is that you have to have all the users on the submission
> machine, and they have to pass their password as a parameter to runas.

runas itself is not a feasible solution because it needs interactive input of the username and password. There are other implementations that allow you to pass the password as an argument, but their output is not redirected to stdout, which means I cannot read the cluster number... Windows pipes have a simple Unix-like interface, which does not work in complex situations, and a very complicated native interface that is somehow beyond my comprehension :-(
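For reading the cluster number back, a small wrapper can let Python do the pipe plumbing instead of the native Windows API. A minimal sketch, assuming condor_submit's usual success line of the form "N job(s) submitted to cluster M." — the `submit` helper and its argument are illustrative, not part of Condor:

```python
import re
import subprocess

def parse_cluster_id(output):
    """Extract the cluster number from condor_submit's stdout.

    On success condor_submit normally prints a line such as
    "1 job(s) submitted to cluster 42."
    """
    match = re.search(r"submitted to cluster (\d+)", output)
    if match is None:
        raise ValueError("no cluster number found in: %r" % output)
    return int(match.group(1))

def submit(submit_file):
    """Run condor_submit and return the new cluster id.

    subprocess sets up the stdout pipe for us, so none of the
    native Windows pipe API has to be touched directly.
    """
    proc = subprocess.run(
        ["condor_submit", submit_file],
        capture_output=True, text=True, check=True,
    )
    return parse_cluster_id(proc.stdout)
```

The parsing is separated from the subprocess call so it can be tested without a running Condor pool.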
> 2) Another way (not sure if it works, though) is to run multiple
> schedds, one schedd per virtual user, started every time a new user
> submits jobs. If it works, the only thing you need is a script to kill
> the schedd after it finishes all its jobs. If it works, it seems to be
> the easiest thing to do.

That sounds interesting, though I'm not sure the system behaves better if I add 20-30 schedd instances (see my post about extremely small jobs..).
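For reference, a second schedd on one machine roughly follows the usual "multiple schedds" config recipe. A hedged sketch only — the names `SCHEDD2`/`schedd2` are arbitrary, and the exact knobs should be checked against the manual for your Condor version:

```
# Define a second schedd daemon based on the regular one (sketch).
SCHEDD2       = $(SCHEDD)
SCHEDD2_ARGS  = -local-name schedd2
SCHEDD2_SPOOL = $(SPOOL)/schedd2

# Give the second instance its own name and spool directory.
SCHEDD.SCHEDD2.SCHEDD_NAME = schedd2
SCHEDD.SCHEDD2.SPOOL       = $(SCHEDD2_SPOOL)

# Have the master start it alongside the normal daemons.
DAEMON_LIST = $(DAEMON_LIST) SCHEDD2
```

Jobs would then be directed at that instance with condor_submit's `-name` option.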
> 3) And finally, you can play with job priority in the queue,
> periodically updating the queue according to the number of users you
> have.

Before I intervene in Condor's scheduling, I would rather restart my project using SGE or Maui. Intelligent scheduling is Condor's task, not mine.
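For completeness, if one did go the priority route, the interleaving itself is easy to compute: assign per-job priorities round-robin across users and apply them with condor_prio (where a larger job priority runs earlier). A sketch of just the assignment step; the function name is made up and applying the priorities is left out:

```python
import itertools
from collections import defaultdict

def interleave_priorities(jobs):
    """Assign descending job priorities so users' jobs interleave.

    `jobs` is a list of (job_id, user) pairs in submission order.
    Returns a dict job_id -> priority, where a higher number should
    run first (matching condor_prio's convention for job priority).
    """
    # Group jobs per user, preserving each user's submission order.
    per_user = defaultdict(list)
    for job_id, user in jobs:
        per_user[user].append(job_id)
    # Round-robin across users: everyone's 1st job, then 2nd, etc.
    order = [job_id
             for batch in itertools.zip_longest(*per_user.values())
             for job_id in batch if job_id is not None]
    top = len(order)
    return {job_id: top - i for i, job_id in enumerate(order)}
```

With the 11112222 example from above, sorting by the resulting priorities yields the desired 12121212 execution order.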