[Condor-users] architectural question/issue
- Date: Mon, 28 Aug 2006 08:02:56 -0700
- From: "bruce" <bedouglas@xxxxxxxxxxxxx>
- Subject: [Condor-users] architectural question/issue
I'm grappling with an issue that I can't seem to get my hands around.
I'm building a test app that spawns off a number of child processes; each
run might create hundreds to thousands of them. Each child process performs
an operation and then needs to write the operation's output to a db.
Here's the issue: with a db like mysql, I can only handle a limited number
of simultaneous connections, bounded by memory/system resources. On a
reasonable machine that might be 1000 simultaneous connections, which means
my child apps will sit in a wait state until one of them can get an open
connection.
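One common way around that wait-state pile-up is to bound concurrency with a small connection pool so children block briefly for a slot instead of each holding (or waiting on) its own server connection. A minimal sketch in Python; the pool size and the `fake_connect` stand-in are illustrative assumptions, not anything from your setup:

```python
import queue
import threading

class ConnectionPool:
    """Bounded pool: at most `size` connections ever exist; callers
    block for a free one instead of each opening their own."""
    def __init__(self, size, connect):
        self._slots = queue.Queue(maxsize=size)
        for _ in range(size):
            self._slots.put(connect())   # pre-open `size` connections

    def acquire(self):
        return self._slots.get()         # blocks while all are in use

    def release(self, conn):
        self._slots.put(conn)

# toy "connection" standing in for a real mysql handle (assumption)
opened = [0]
def fake_connect():
    opened[0] += 1
    return object()

pool = ConnectionPool(size=4, connect=fake_connect)

results = []
def worker(n):
    conn = pool.acquire()
    try:
        results.append(n)                # pretend to write one row
    finally:
        pool.release(conn)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(opened[0], len(results))           # 4 connections served 100 writers
```

The point of the sketch is that 100 writers complete with only 4 connections ever opened; the children spend their wait time in a cheap in-process queue rather than tying up server-side resources.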
I could also have each child simply write a raw output/text file, and have
some sort of external process be responsible for reading those files and
writing the data to the db. But here again this would be slow, since it's
sequential in nature, unless I built some sort of multithreaded app that
used a connection-pooling approach. It doesn't appear the file approach
would be any faster than waiting for an open connection.
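The file approach may be faster than it looks, though: a single loader that sweeps the drop directory and inserts everything as one batched transaction avoids per-row round trips entirely, so "sequential" stops mattering much. A sketch of that loader; sqlite3 stands in here for mysql (the file names, directory layout, and schema are made up for illustration), but the same `executemany`-style batching applies:

```python
import os
import sqlite3
import tempfile

# --- children: each drops one line per operation into its own file ---
outdir = tempfile.mkdtemp()
for child in range(5):
    with open(os.path.join(outdir, f"child-{child}.out"), "w") as f:
        for op in range(10):
            f.write(f"{child},{op}\n")

# --- loader: sweep the directory, parse, insert in ONE transaction ---
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE results (child INTEGER, op INTEGER)")

rows = []
for name in os.listdir(outdir):
    with open(os.path.join(outdir, name)) as f:
        for line in f:
            c, o = line.strip().split(",")
            rows.append((int(c), int(o)))

with db:  # one commit for the whole batch, not one per row
    db.executemany("INSERT INTO results VALUES (?, ?)", rows)

total = db.execute("SELECT COUNT(*) FROM results").fetchone()[0]
print(total)
```

With mysql specifically, the same idea goes further: the loader can hand each swept file straight to a bulk-load statement instead of parsing it row by row.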
My question: is there potentially a faster/better way to slam data into a
db at a high rate? I could even set up some sort of distributed db
farm/apps if that makes sense.
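On the distributed-farm idea: the usual shape is to shard writes by hashing a key, so each database only sees 1/N of the connections and insert volume. A toy router, again with sqlite3 standing in for N mysql servers (the key format and schema are invented for the example):

```python
import sqlite3

# four in-memory databases standing in for four db servers (assumption)
SHARDS = [sqlite3.connect(":memory:") for _ in range(4)]
for db in SHARDS:
    db.execute("CREATE TABLE results (key TEXT, value INTEGER)")

def shard_for(key):
    # hash the key to pick a shard; same key always routes the same way
    return SHARDS[hash(key) % len(SHARDS)]

for i in range(100):
    key = f"job-{i}"
    with shard_for(key) as db:   # commits this write on the chosen shard
        db.execute("INSERT INTO results VALUES (?, ?)", (key, i))

counts = [db.execute("SELECT COUNT(*) FROM results").fetchone()[0]
          for db in SHARDS]
print(sum(counts), counts)       # 100 rows spread across the 4 shards
```

The trade-off is that queries spanning shards now need a merge step, so this only pays off if the write rate is really the bottleneck.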
At this point I'm grasping for ideas... searching google hasn't made the
proverbial light go on!