With remote submit, there is no shadow process running on your submit machine to keep track of the job; everything happens on the schedd you submitted to. Your local Condor therefore does not have to stay connected to the pool, but it also does not know when the job finishes. Thus, you have to tell it explicitly when to fetch the output files.
Use condor_transfer_data to have Condor transfer the files back just as it would for a regular job. It should look something like this:
$ condor_transfer_data -name condor02.hpc.com -all
See the condor_transfer_data reference for more information.
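Putting it together, the end-to-end remote workflow looks roughly like this. The schedd name and the cluster id (1234) are placeholders; condor_submit's -remote option implies -spool, so the input files are copied to the remote schedd at submit time:

$ # Submit to the remote schedd; job runs detached from this machine.
$ condor_submit -remote condor02.hpc.com job.sub
$ # Poll until the job reaches Completed (JobStatus == 4).
$ condor_q -name condor02.hpc.com 1234
$ # Fetch output/error/transfer_output_files back to the local machine,
$ # then remove the job so it does not linger in the remote queue.
$ condor_transfer_data -name condor02.hpc.com 1234
$ condor_rm -name condor02.hpc.com 1234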
If you run into problems with Condor cleaning up your job before you have had a chance to fetch its output, you will have to tell Condor to keep the job in the queue until it has been retrieved.
The documentation suggests adding this to the submit description:
leave_in_queue = (JobStatus == 4) && ((StageOutFinish =?= UNDEFINED) || (StageOutFinish == 0))
but for me it kept jobs in the queue indefinitely because StageOutFinish was never set, even after the transfer had completed. That might have been a problem on my side, though.
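One workaround for the StageOutFinish problem is to add a time cap to the expression, so a job is kept until its output is staged out but can never linger forever. A submit-description sketch (executable/file names and the 7-day limit are placeholder choices, not from the original mail):

# job.sub -- sketch of a spooled job with bounded retention
executable              = my_job.sh
output                  = my_job.out
error                   = my_job.err
log                     = my_job.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT

# Keep the completed job queued until stage-out finishes, but give
# up after 7 days in case StageOutFinish is never set.
leave_in_queue = (JobStatus == 4) && \
                 ((StageOutFinish =?= UNDEFINED) || (StageOutFinish == 0)) && \
                 ((time() - EnteredCurrentStatus) < (7 * 24 * 3600))

queue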
On 03/27/2013 04:54 PM, Javi Roman wrote: