
Re: [Condor-users] Reading /dev/random



Olaf,

> I'm running MC simulations using Condor, and I encountered the
> following problem.
> 
> Condor runs on two Linux clusters running SUSE Linux, one using SUSE
> 7.0 (yes, I know it's old) and Condor 6.4.1, the other SUSE 9.0 and
> Condor 6.6.3.
> 
> I want to seed the random number generator via the /dev/random device.
> For that I'm basically using the following code:
> ---------------------------------------------
> unsigned long int seed;
> FILE *devrandom = fopen("/dev/random", "r");  /* open the entropy device */
> fread(&seed, sizeof(seed), 1, devrandom);     /* read one word into seed */
> ---------------------------------------------
> (for people who would like to try it, I've attached a .C file that wraps
> all the necessary code around these two calls)
> 
> Running this code in the Condor standard universe results in the
> following behaviour:
> 1. the fopen() call succeeds
> 2. the fread() call returns 0 and does NOT set seed.
> 3. the end-of-file indicator is set (feof() returns nonzero).
> 
> This is highly confusing. In theory, reading /dev/random should either
> return random bytes, or it should block until enough entropy is
> available (see the random(4) manpage).
> 
> My questions:
> 1. Is it possible to access /dev/random from a standard universe job?
> 2. Is the described behaviour a bug or a feature?

I have encountered similar problems before.  The root cause is that
the Condor library uses stat() to see how big a file is before trying
to read from it.  Character devices like /dev/random (and virtual
files such as /proc entries) report a size of zero from stat(), so
Condor concludes there is nothing to read and signals EOF.  The
problems I was seeing had to do with failure to read from
/proc/meminfo, but the symptoms were exactly the same.
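
Here is a minimal sketch (ordinary C, not Condor code) that shows the
mismatch: stat() reports st_size == 0 for the device, yet a direct
read() still returns random bytes:

---------------------------------------------
#include <stdio.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    struct stat st;
    unsigned long seed;

    if (stat("/dev/random", &st) == 0)
        printf("stat() says st_size = %ld\n", (long) st.st_size);  /* 0 */

    int fd = open("/dev/random", O_RDONLY);
    if (fd >= 0) {
        /* blocks until entropy is available, then succeeds */
        ssize_t n = read(fd, &seed, sizeof(seed));
        printf("read() returned %zd object(s)\n", n);
        close(fd);
    }
    return 0;
}
---------------------------------------------

Any code that trusts the stat() size instead of the read() return
value will wrongly conclude the file is empty.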

The answer to question 1 is to add "Local_Files = /dev/random" to your
job submission file.
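
For reference, a standard universe submit file with that line added
might look like the following (the executable and file names here are
just placeholders):

---------------------------------------------
universe    = standard
executable  = mc_sim
output      = mc_sim.out
error       = mc_sim.err
log         = mc_sim.log
Local_Files = /dev/random
queue
---------------------------------------------

With Local_Files set, the job opens the listed file directly on the
execute machine instead of going through Condor's remote I/O path,
which is presumably what avoids the failing size check.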

Question 2 is one for the Condor developers.  I think it's a bug,
because there is no reason for the read() to fail.

-- 
Daniel K. Forrest	Laboratory for Molecular and
forrest@xxxxxxxxxxxxx	Computational Genomics
(608) 262-9479		University of Wisconsin, Madison