Thank you very much for your reply!
- Another tricky part is uploading/downloading files to/from the node the job is running on. As I already mentioned, each process communicates with some server, and sometimes it is necessary to transfer files to/from the submit node while the job is running. Is there a way to do this? There must be an upload/download mechanism in HTCondor, since it transfers files before the job starts and after it finishes (when there is no shared file system). Can I somehow use this mechanism to upload/download files while the job is running? Is there an API or command-line tool for this?
Parallel universe jobs have a few extra environment variables set for them that you might find useful when using chirp:
_CONDOR_PROCNO: each node in the job has a specific number, from 0 to machine_count - 1, and this variable contains that number.
_CONDOR_REMOTE_SPOOL_DIR: a scratch directory on the submit node specifically for this parallel universe job, good for sharing files among nodes (using chirp).
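As a sketch, a node's wrapper script could combine those variables with condor_chirp to stage files through the job's spool directory on the submit node (the file names here are hypothetical, and condor_chirp only works from inside a running job):

```shell
#!/bin/sh
# Sketch of a parallel-universe node script.
# Upload this node's output into the job's spool directory on the
# submit node, tagged with the node number:
condor_chirp put result.dat "$_CONDOR_REMOTE_SPOOL_DIR/result_$_CONDOR_PROCNO.dat"

# Node 0 could later pull a peer's file back down the same way:
if [ "$_CONDOR_PROCNO" = "0" ]; then
    condor_chirp fetch "$_CONDOR_REMOTE_SPOOL_DIR/result_1.dat" result_1.dat
fi
```

Note the job must be submitted with `+WantIOProxy = True` for chirp to be available.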
It looks like exactly what I was looking for. It seems "condor_chirp" works on the machine where the job is running. Is there something I can use on the submit machine for the same purpose? I mean, is there a way to transfer files between the submit machine and the node where the job is running by entering a command on the submit machine itself? Again, it seems "condor_chirp" only works if I run it from the job itself.
- Is it possible to somehow output a list of slots available in external pools? I can see slots in my own pool, but I cannot see the slots available in the pools my pool can flock to. It is strange, because condor_q shows that the job is running, but I do not see where; condor_status reports only the nodes/slots in my pool.
condor_status -pool <hostname/ip of external pool's central manager>
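Since the list of flocking targets lives in the local configuration, one way to query all of them in one go is to read FLOCK_TO and loop over it (a sketch, assuming FLOCK_TO is set on your submit machine):

```shell
# Query the collector of every pool this schedd can flock to.
for pool in $(condor_config_val FLOCK_TO | tr ',' ' '); do
    echo "=== $pool ==="
    condor_status -pool "$pool"
done
```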
- A suspicious issue: during my experiments I several times ended up in a state where all slots were "Claimed/Idle". They stayed in this state for a rather long time (roughly half an hour to an hour), after which they woke up and continued processing jobs. I am still not sure how to reproduce this. It is probably connected to restarting the central HTCondor manager (systemctl restart condor), but I am not 100% sure. Again, any ideas?
When using the dedicated scheduler (i.e. when submitting parallel universe jobs), the dedicated scheduler is configured to keep claims on any resources it acquires, for a configurable amount of time, in case other jobs needing dedicated resources are sitting in the queue or are submitted shortly after those resources go idle. The config setting to adjust here is "UNUSED_CLAIM_TIMEOUT". This should be 10 minutes by default, so I'm not sure why you're seeing half an hour.
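As a sketch, the timeout can be lowered in the local configuration on the submit machine (the value is in seconds; the 300 here is just an illustrative choice):

```
# condor_config.local on the dedicated scheduler's machine:
# release unused claims after 5 minutes instead of the 10-minute default.
UNUSED_CLAIM_TIMEOUT = 300
```

After a `condor_reconfig`, the effective value can be checked with `condor_config_val UNUSED_CLAIM_TIMEOUT`.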
Thank you. I will check whether changing the value of this variable helps. BTW, I cannot find any mention of "UNUSED_CLAIM_TIMEOUT" in the docs. Am I missing something, or is this simply not documented yet?
Also keep in mind that if you have any idle parallel universe jobs in your queue, the dedicated scheduler is going to try its best to claim resources for each of those jobs, and those resources are going to be claimed/idle until the scheduler is able to
claim enough resources for the job to start.
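To see which slots the dedicated scheduler is currently holding without running anything on them, a constraint on the state/activity pair works (a sketch; run it against your pool's collector):

```shell
# List slots that are claimed but not busy (the "Claimed/Idle" state):
condor_status -constraint 'State == "Claimed" && Activity == "Idle"'
```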
This strategy is fine for me for now. Can I be sure that a deadlock will not happen if multiple parallel jobs are waiting in the queue at the same time?
HTCondor-users mailing list