Re: [Gems-users] Change in cpu frequency


Date: Wed, 15 Aug 2007 18:11:42 -0500
From: Dan Gibson <degibson@xxxxxxxx>
Subject: Re: [Gems-users] Change in cpu frequency
GEMS is pretty much divorced from what Simics thinks is "one cycle". Simics generally uses an artificially low CPU speed to effectively speed up I/O -- recall that, with no timing models installed, Simics models a CPI 1.0 processor anyway.

Ruby, on the other hand, maintains its own notion of time: every SIMICS_RUBY_MULTIPLIER Simics cycles is equivalent to one Ruby cycle, modulo some transient behaviors. Thus a 75 MHz and a 5 GHz processor both observe the same *relative* memory-system latencies with Ruby, when measured in processor cycles.
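To make the mapping concrete, here is a minimal sketch (not GEMS source code) of the relationship described above; the multiplier value of 4 is purely illustrative, not a claim about the GEMS default:

```python
# Hypothetical sketch of Ruby's time mapping: every SIMICS_RUBY_MULTIPLIER
# Simics cycles advance Ruby by one cycle, so latencies measured in Ruby
# cycles are independent of the MHz value configured in Simics.

SIMICS_RUBY_MULTIPLIER = 4  # illustrative value, not the GEMS default

def ruby_cycles(simics_cycles: int) -> int:
    """Convert elapsed Simics cycles into elapsed Ruby cycles."""
    return simics_cycles // SIMICS_RUBY_MULTIPLIER

# The same count of Simics cycles yields the same Ruby latency
# whether the checkpoint says 75 MHz or 5 GHz.
print(ruby_cycles(400))  # -> 100
```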

To put it another way, changing Simics's MHz doesn't affect Ruby.

That said, let me address your other issues (long simulation time and apparent deadlock).

The long simulation time is almost certainly an artifact of the change in relative I/O speed. To simulate a "disk access", Simics has to run roughly 100x longer when simulating a 5 GHz processor than it does at 75 MHz. Hence we generally opt for a "slow" CPU speed to improve simulator performance in the face of compulsory I/O behavior. Since all latencies are relative, changing Simics's CPU speed only affects I/O, not memory latency.
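The arithmetic behind this can be sketched as follows; the 5 ms disk-access time is an assumed illustrative figure, not a number from Simics:

```python
# Back-of-the-envelope illustration: a disk access takes a fixed amount of
# *simulated real time*, so the number of CPU cycles Simics must step
# through while waiting for it scales linearly with the configured clock.

DISK_ACCESS_S = 0.005  # assumed 5 ms disk-access time (illustrative)

def cycles_to_simulate(freq_hz: float, io_time_s: float = DISK_ACCESS_S) -> int:
    """CPU cycles Simics must execute while the I/O completes."""
    return int(freq_hz * io_time_s)

slow = cycles_to_simulate(75e6)  # 75 MHz checkpoint
fast = cycles_to_simulate(5e9)   # 5 GHz checkpoint
print(fast // slow)  # -> 66, i.e. on the order of 100x more work per I/O
```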

As for the deadlock concern, I don't know what might be causing that. Whereabouts in Ruby's code is the error arising?

Regards,
Dan

Lide Duan wrote:
I ran into some problems while trying to change the CPU frequency of my checkpoints.

Basically, what I did was: I added some counters to the Ruby network code to record the delay cycles of each kind of message, and dumped the results every fixed number of Ruby cycles. I ran the modified Ruby on some checkpoints, e.g. SPECjbb2005, barnes, ocean, etc. The results seemed reasonable: some of the messages encountered delays, and the delay cycles increased when I reduced the bandwidth of the links or the finite buffer size. However, these checkpoints were created with a CPU frequency of 75 MHz, which is too low for modern machines. I recreated some checkpoints with a CPU frequency of 5 GHz, expecting the delay cycles to be much larger than for the 75 MHz checkpoints due to the much higher frequency.

However, strange things happened. For the jbb_5ghz checkpoint, I ran it for several days with Ruby loaded, but it never reached the first magic instruction, which is used to end the Ruby warm-up phase and start the real workload. For barnes_5ghz, I got the warm-up checkpoint, but the simulation results from it were quite strange: the delay cycles of the messages are almost zero, and the simulation also stopped soon after with a "Possible Deadlock detected" complaint from GEMS. I am pretty confused, because I made no further modifications to Ruby after changing the checkpoints; the only difference is the CPU frequency of the checkpoints.

So I am wondering: is there any restriction on the CPU frequency that GEMS can support? I didn't find anything related in the Ruby configuration. How does GEMS deal with the CPU frequency? Also, what is a reasonable CPU frequency for current research? Is 75 MHz too low, or 5 GHz too high?

Thanks,
Lide
------------------------------------------------------------------------



--
http://www.cs.wisc.edu/~gibson [esc]:wq!
