
Re: [Condor-users] IDLE job



> 236 match, but prefer another specific job despite its worse user-priority

This information is misleading. You have to look at the log files on the
submitting machine. Also, have a look at your job's own log file; maybe
your job was in fact started on a machine. If so, you also have to look at
the log files on the executing machine.
Hopefully you will get some more information this way...
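
As a minimal sketch (assuming a default installation where the daemon log
directory is the one reported by condor_config_val LOG, and that your submit
description file names a user log, e.g. "log = job.log"), the files to check
would be roughly:

    # On the submitting machine: scheduler and shadow logs
    condor_config_val LOG
    tail -n 50 `condor_config_val LOG`/SchedLog
    tail -n 50 `condor_config_val LOG`/ShadowLog

    # The user log named by "log = ..." in the submit description file
    tail -n 50 job.log

    # On the executing machine, if the job was in fact started there
    tail -n 50 `condor_config_val LOG`/StartLog
    tail -n 50 `condor_config_val LOG`/StarterLog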

Bye,
Thomas Bauer
--------------------------------------------
Westfaelische Wilhelms-Universitaet Muenster
Institut fuer Festkoerpertheorie
Wilhelm-Klemm-Str. 10
D 48149 Muenster
++49 (251) 8339040
--------------------------------------------
-----Original Message-----
From: condor-users-bounces@xxxxxxxxxxx
[mailto:condor-users-bounces@xxxxxxxxxxx] On Behalf Of marco Netscape
Sent: Monday, 28 June 2004 11:41
To: Condor-Users Mail List
Subject: Re: [Condor-users] IDLE job

Hi, related to the previous mail, this is the report for that IDLE job:


061.000:  Run analysis summary.  Of 236 machines,
      0 are rejected by your job's requirements
      0 reject your job because of their own requirements
      0 match, but are serving users with a better priority in the pool
    236 match, but prefer another specific job despite its worse user-priority
      0 match, but will not currently preempt their existing job
      0 are available to run your job
        Last successful match: Mon Jun 28 11:02:21 2004

What does "236 match, but prefer another specific job despite its worse
user-priority" mean?



marcofuics@xxxxxxxxxxxx wrote:

> Hi *
> I'm using Condor (the latest version) on a Linux cluster.
> When I submit a job to Condor I can see that the job remains in the
> IDLE status for several minutes..... (too many minutes)
>
> condor_q
>  ID      OWNER          SUBMITTED     RUN_TIME ST PRI SIZE CMD
>  61.0   eo003           6/28 10:41   0+00:00:28 I  0   0.5  bash -c date +%s;
>
> But all the nodes of the cluster are free of heavy processes; in fact
> (here follows only a piece of the report):
> condor_status
> Name          OpSys       Arch   State      Activity   LoadAv Mem   ActvtyTime
>
> vm1@xxxxxxxxx LINUX       INTEL  Unclaimed  Idle       0.000   503  0+00:00:03
> vm2@xxxxxxxxx LINUX       INTEL  Unclaimed  Idle       0.000   503  0+03:35:16
> vm3@xxxxxxxxx LINUX       INTEL  Unclaimed  Idle       0.000   503  0+03:35:12
> vm4@xxxxxxxxx LINUX       INTEL  Unclaimed  Idle       0.000   503  0+03:35:22
> vm1@xxxxxxxxx LINUX       INTEL  Unclaimed  Idle       0.000   503  0+03:35:08
> vm2@xxxxxxxxx LINUX       INTEL  Unclaimed  Idle       0.000   503  0+03:35:05
> vm3@xxxxxxxxx LINUX       INTEL  Unclaimed  Idle       0.000   503  0+03:35:19
> vm4@xxxxxxxxx LINUX       INTEL  Unclaimed  Idle       0.000   503  0+03:35:16
> .......
> .......
> vm1@xxxxxxxxx LINUX       INTEL  Unclaimed  Idle       0.000   503  0+03:35:19
> vm2@xxxxxxxxx LINUX       INTEL  Unclaimed  Idle       0.000   503  0+03:35:16
> vm3@xxxxxxxxx LINUX       INTEL  Unclaimed  Idle       0.000   503  0+03:35:12
> vm4@xxxxxxxxx LINUX       INTEL  Unclaimed  Idle       0.000   503  0+03:35:08
>
>
>
> I have done the standard installation and set-up for the whole cluster.
>
>
>
>
>
> Can I speed up these jobs by tuning a set of parameters?
> Why does a job remain in the IDLE state?
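
A sketch of the pool-wide knobs that usually govern how quickly idle jobs get
matched (example values only, not a recommendation for this particular pool;
they go in the relevant condor_config files and take effect after a
condor_reconfig):

    # Central manager: how often the negotiator starts a matchmaking cycle
    NEGOTIATOR_INTERVAL = 60
    # Submit machine: how often the schedd sends its ClassAd update to the collector
    SCHEDD_INTERVAL = 60
    # Execute machines: how often each startd sends its ClassAd to the collector
    UPDATE_INTERVAL = 60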
>
>
>
>
>
>
>


_______________________________________________
Condor-users mailing list
Condor-users@xxxxxxxxxxx
http://lists.cs.wisc.edu/mailman/listinfo/condor-users