Message-ID: <20070414120220.GA2346@elte.hu>
Date: Sat, 14 Apr 2007 14:02:20 +0200
From: Ingo Molnar <mingo@...e.hu>
To: surya.prabhakar@...ro.com
Cc: kernel@...ivas.org, linux-kernel@...r.kernel.org,
torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
npiggin@...e.de, efault@....de, arjan@...radead.org,
tglx@...utronix.de, wli@...omorphy.com
Subject: Re: [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40
* surya.prabhakar@...ro.com <surya.prabhakar@...ro.com> wrote:
> Hi Ingo,
> Did a test with massive_intr.c on a standard Linux desktop,
> for vanilla, Con's SD-0.40 and CFS.
thanks!
> [surya@...egenie tests]$ ./massive_intr 10 10
> 002435 00000120
> 002439 00000120
> 002441 00000120
> 002434 00000120
> 002436 00000120
> 002440 00000120
> 002432 00000120
> 002437 00000120
> 002433 00000120
> 002438 00000120
>
> Felt it is too fair, will try another pass ;)
hehe :)
> [surya@...egenie tests]$ ./massive_intr 10 10
> 002961 00000121
> 002965 00000120
> 002964 00000121
> 002959 00000120
> 002956 00000121
> 002963 00000121
> 002960 00000121
> 002962 00000121
> 002958 00000122
> 002957 00000122
btw., other schedulers might work better with some more test-time: i'd
suggest using 60 seconds (./massive_intr 10 60) [or maybe more, with
more threads] to see long-term fairness effects.
> [...] Will be trying out ringtest in the next round.
cool. ringtest.c is intended to be used the following way: start it, it
will generate a 99% busy system (but it is using a ring of 100 tasks,
where each task runs for 100 msecs then sleeps for 1 msec, so every
task gets a turn every 10 seconds). If you add a pure CPU hog to the
system, for example an infinite shell loop:
while :; do :; done &
then a 'fair' scheduler would give roughly 50% of CPU time to the CPU
hog (since only one ringtest task is runnable at any moment, the hog in
effect competes against a single task), and the ringtest.c tasks take
up the other 50%.
Ingo