[Gems-users] Power variations drastic from simulation to simulation


Date: Thu, 21 Apr 2011 13:12:26 -0700
From: Jacob Murray <jamurray@xxxxxxx>
Subject: [Gems-users] Power variations drastic from simulation to simulation
Hi all,
 
I am working on reducing power for NoC systems, and I am currently running the SPLASH-2 FFT benchmark. I run the benchmark with magic breakpoints placed before and after the parallel code section. I start Ruby at the 1st magic breakpoint and dump the stats at the 2nd. I have read Alameldeen's paper on workload variability (mentioned on this mailing list), so I understand there will be some variability.
 
When I run the same command (./fft -m10 -p16 -n65536 -l4) 10 times in a row, I get a large variability, with a variance near half the mean for my statistics. My question is: is it OK to start Ruby at the 1st magic breakpoint and dump the stats at the 2nd, or is there a better method? If this method is fine, I do not understand how my statistics can vary this much.
 
For example, my total router power ranges from 7 to 15 W, average latency from 43 to 58, cycle count from 197K to 460K, and total flits from 280K to 780K. I just don't see how I can get such a wide range in these stats by running the same parallel code. If this is expected, how many runs do you do to reach a valid conclusion?
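For what it's worth, here is the kind of calculation I have been using to summarize my runs: compute a 95% confidence interval on the mean and estimate how many runs I would need to tighten it. This is just a sketch assuming the per-run statistics are roughly normally distributed; the numbers below are made up, chosen only to span the 7-15 W range I am seeing.

```python
import math
import statistics

# Hypothetical total-router-power values (W) from 10 identical runs
# (made-up numbers spanning the observed 7-15 W range):
power = [7.2, 14.8, 9.1, 12.3, 8.0, 15.0, 10.5, 7.9, 13.6, 11.1]

n = len(power)
mean = statistics.mean(power)
sd = statistics.stdev(power)           # sample std. deviation (n - 1 divisor)
t95 = 2.262                            # t critical value, df = 9, 95% two-sided
half_width = t95 * sd / math.sqrt(n)   # 95% confidence half-interval

print(f"mean = {mean:.2f} W, 95% CI = +/- {half_width:.2f} W")

# Rough number of runs needed to shrink the half-interval to a target
# (approximate, since the t value itself also changes with n):
target = 0.5  # W
needed = math.ceil((t95 * sd / target) ** 2)
print(f"approx. runs needed for +/- {target} W: {needed}")
```

With variance this large, the estimated run count comes out far above 10, which is why I am asking whether this spread is expected or whether my measurement method is wrong.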
 
Thanks,
Jacob Murray
PhD Candidate EECS, Washington State University