So that may do the trick in your case. The only tricky item would be file delivery: the schedd on that host would need filesystem access to the jobs' files. There's no provision, as far as I know, for transferring files from the machine running condor_submit to the machine running the schedd.
condor_submit -remote <schedd-host> (or, if you have SCHEDD_HOST set, you can just use -spool and skip specifying the host) will transfer files from the machine it ran on to the remote schedd's spool directory. Unfortunately, you then have to (remember to) use condor_transfer_data to get your results back.
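As a sketch, the round trip looks something like this (hostname and job ID are placeholders, not real values):

```shell
# Submit from your local machine, spooling input files to the remote schedd.
# "schedd.example.com" stands in for your schedd's hostname.
condor_submit -remote schedd.example.com myjob.sub

# ...later, once the job has completed, fetch the output files back
# from the remote schedd's spool (1234 is a placeholder cluster id):
condor_transfer_data -name schedd.example.com 1234
```

If SCHEDD_HOST is set in your configuration, `condor_submit -spool myjob.sub` does the same thing without naming the host on the command line.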
You could also submit Condor-C jobs. These require a local schedd, but you only have to be online to transfer the job out and the results back. (Like any grid job, the Condor-C job reflects the state of the corresponding job in the remote queue. If it's idle, but the GridJobStatus attribute indicates it's in the remote queue, it's safe to disconnect.) The job will stay in the remote schedd's queue after completing until the local schedd transfers the results back. (Many vanilla universe jobs can be converted directly into Condor-C jobs by adding 'universe = grid' and 'grid_resource = condor <user@schedd> <cm>' to the submit file.)
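For concreteness, a converted submit file might look roughly like this; the schedd and central-manager names are placeholders, and the rest is an ordinary vanilla-style submit file:

```
# Formerly a vanilla universe job; the two grid lines below are the
# only additions needed for Condor-C.
universe      = grid
grid_resource = condor me@schedd.example.com cm.example.com

executable    = my_analysis
output        = out.$(Cluster)
error         = err.$(Cluster)
log           = job.log

should_transfer_files   = YES
when_to_transfer_output = ON_EXIT

queue
```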
It's also possible to automatically convert vanilla universe jobs into Condor-C jobs, using the job router, but I don't recall how that would end up looking to the user.
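A job router route for this might be configured along these lines; this is only a hedged sketch (route name and hostnames are made up, and the exact knobs vary by HTCondor version), so check the job router documentation for your release:

```
# Hypothetical job router route that sends local vanilla universe
# (JobUniverse == 5) jobs to a remote schedd as Condor-C jobs.
JOB_ROUTER_ENTRIES = \
  [ name = "to-remote-pool"; \
    GridResource = "condor schedd.example.com cm.example.com"; \
    Requirements = TARGET.JobUniverse == 5; \
  ]
```

From the user's perspective the routed copy shows up as a grid universe job alongside the original, but as noted above I don't recall the details of how that looks in practice.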
- ToddM