
Re: [condor-users] java memory requirement

On Fri, 9 Apr 2004, James Wilgenbusch wrote:

> I applied both suggestions and things seemed to work for a while.
> More recently, however, I'm running into serious problems with the
> schedd.  Here's a snippet from the schedlog:
> 4/9 22:23:03 DaemonCore: Command Socket at <>
> 4/9 22:23:18 ERROR "Error: bad record with op=103 in corrupt logfile"
> at line 723 in file classad_log.C
> I've now set things back to the previous state and would like to know
> what log file I need to get rid of so that I can restart the schedd
> without running into this issue?

It looks like the job_queue.log file is corrupted. As the name implies, it
contains the schedd's job queue. Deleting it will remove all jobs from your
queue and reset cluster ids for new jobs to 1.
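If you do decide to clear the queue, a cautious sequence is to stop the
schedd first and move the file aside rather than delete it outright. A
sketch (the backup filename is arbitrary, and this assumes the Condor
admin tools are on your PATH):

```shell
# Stop the schedd before touching its queue file.
condor_off -schedd

# Ask Condor where the spool directory is rather than guessing the path.
SPOOL=$(condor_config_val SPOOL)

# Move the corrupt log aside; keep it in case you want to hand-edit it later.
mv "$SPOOL/job_queue.log" "$SPOOL/job_queue.log.corrupt"

# Restart the schedd; it will create a fresh, empty job queue.
condor_on -schedd
```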

It is possible to hand-edit the job_queue.log to fix the corrupt entries,
if you're desperate not to lose the submitted jobs.
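To find the record to fix, note that each line of the log starts with a
numeric op code (the error above names op=103, SetAttribute). A rough way
to spot truncated lines is to flag records that start with 103 but carry
too few fields — the four-field minimum here is an assumption for
illustration, not the authoritative format:

```shell
# Flag line numbers of op=103 records with fewer than 4 fields
# (op code, id, attribute name, value). Sample input stands in for
# a real job_queue.log; the second line mimics a truncated record.
printf '103 1.0 JobStatus 2\n103 1.0\n' |
  awk '$1 == 103 && NF < 4 { print NR }'
```

Run against the real file with `awk '...' job_queue.log`; the printed line
numbers are candidates for repair or removal in your editor.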

|             Jaime Frey             |There are 10 types of people in|
|         jfrey@xxxxxxxxxxx          |the world: Those who understand|
|   http://www.cs.wisc.edu/~jfrey/   |  binary, and those who don't  |