Message-ID: <b647ffbd0705091224m75bf8caawa0019ecfe8bf82e7@mail.gmail.com>
Date:	Wed, 9 May 2007 21:24:17 +0200
From:	"Dmitry Adamushko" <dmitry.adamushko@...il.com>
To:	vatsa@...ibm.com
Cc:	"Ingo Molnar" <mingo@...e.hu>, "Con Kolivas" <kernel@...ivas.org>,
	"Mike Galbraith" <efault@....de>,
	"Linux Kernel" <linux-kernel@...r.kernel.org>
Subject: Re: Definition of fairness (was Re: [patch] CFS scheduler, -v11)

On 09/05/07, Srivatsa Vaddagiri <vatsa@...ibm.com> wrote:
> On Tue, May 08, 2007 at 05:04:31PM +0200, Ingo Molnar wrote:
> > thanks Mike - value 0x8 looks pretty good here and doesn't have the
> > artifacts you found. I've done a quick -v11 release with that fixed,
> > available at the usual place:
> >
> >     http://people.redhat.com/mingo/cfs-scheduler/
> >
> > with no other changes.
>
> Ingo,
> 	I had a question with respect to the definition of fairness used,
> especially for tasks that are not 100% cpu hogs.
>
> Ex: consider two equally important tasks T1 and T2 running on the same CPU and
> whose execution nature is:
>
>         T1 = 100% cpu hog
>         T2 = 60% cpu hog (run for 600ms, sleep for 400ms)
>
> Over an arbitrary observation period of 10 sec,
>
>         T1 was ready to run for all 10sec
>         T2 was ready to run for 6 sec
>

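(For concreteness, the answer depends on the definition chosen: if
"fair" means a strict 50/50 split whenever both tasks are runnable,
T2 needs ~1200 ms of wall time per 600 ms chunk of work and then
sleeps for 400 ms, so over each 1600 ms cycle T2 gets 600 ms (37.5%)
and T1 gets 1000 ms (62.5%), i.e. roughly 3.75 s vs 6.25 s per 10 s;
if instead T2 is always scheduled promptly when it wakes up, it gets
its full 6 s and T1 the remaining 4 s.)
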
One quick observation.

Isn't it important for both processes to have the same "loops_per_ms" value?

At least on my machine, that's not always the case. See how the
results differ in the following two tests:

(1) when the loops_per_ms values happen to be quite different;
(2) when they happen to be pretty much the same.

Both runs are on mainline 2.6.19, and both processes have identical
characteristics, namely:

./cpuhog 600 400 10

I start them one after the other, from separate shells.
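
By the way, the cpuhog source itself is not included in this mail, so
here is only a rough sketch of what such a benchmark might look like;
the calibration method and all of the names below are illustrative
guesses, not the actual cpuhog code (link with -lrt on older glibc
for clock_gettime()):

/*
 * cpuhog.c -- hypothetical sketch of a cpuhog-style benchmark
 * (the real source is not part of this mail).
 *
 * usage: ./cpuhog <run_ms> <sleep_ms> <epoch_s>
 *
 * Calibrates loops_per_ms once at startup, then alternates <run_ms>
 * of busy-looping with <sleep_ms> of sleeping, and reports how much
 * CPU time it obtained during each <epoch_s>-second epoch.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static volatile unsigned long sink;	/* keeps the busy loop alive */

static void burn(unsigned long loops)
{
	unsigned long i;

	for (i = 0; i < loops; i++)
		sink = i;
}

/* CPU time consumed by this process, in seconds */
static double cpu_seconds(void)
{
	struct timespec t;

	clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t);
	return t.tv_sec + t.tv_nsec / 1e9;
}

/* how many busy-loop iterations fit into 1 ms of CPU time? */
static double calibrate_loops_per_ms(void)
{
	const unsigned long probe = 10 * 1000 * 1000;
	double t0 = cpu_seconds();

	burn(probe);
	return probe / ((cpu_seconds() - t0) * 1000.0);
}

int main(int argc, char **argv)
{
	long run_ms, sleep_ms, epoch_s;
	unsigned long run_loops;
	double lpms;

	if (argc != 4) {
		fprintf(stderr, "usage: %s <run_ms> <sleep_ms> <epoch_s>\n",
			argv[0]);
		return 1;
	}
	run_ms = atol(argv[1]);
	sleep_ms = atol(argv[2]);
	epoch_s = atol(argv[3]);

	lpms = calibrate_loops_per_ms();
	run_loops = (unsigned long)(lpms * run_ms);
	printf("loops_per_ms = %f\n\n", lpms);
	printf("run time = %ld ms (%lu loops), sleep time = %ld ms, "
	       "epoch time = %ld s\n", run_ms, run_loops, sleep_ms, epoch_s);

	for (;;) {
		double cpu0 = cpu_seconds();
		time_t wall0 = time(NULL);

		/* alternate run/sleep phases until the epoch elapses */
		while (time(NULL) - wall0 < epoch_s) {
			burn(run_loops);
			if (sleep_ms > 0)
				usleep(sleep_ms * 1000);  /* < 1 s here */
		}
		printf("Obtained %.2f seconds of execution time "
		       "in %ld elapsed seconds\n",
		       cpu_seconds() - cpu0, (long)(time(NULL) - wall0));
	}
}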

(1)

dimm@...m:/mnt/storage/kernel/sched-bench> ./cpuhog 600 400 10
loops_per_ms = 94517.958412  <---------- (*)

run time = 600 ms (56710775 loops), sleep time = 400 ms, epoch time = 10 s
Obtained 6.39 seconds of execution time in 10 elapsed seconds
Obtained 4.52 seconds of execution time in 11 elapsed seconds
Obtained 4.47 seconds of execution time in 10 elapsed seconds
Obtained 4.47 seconds of execution time in 10 elapsed seconds
Obtained 4.53 seconds of execution time in 10 elapsed seconds
Obtained 4.14 seconds of execution time in 10 elapsed seconds
...

dimm@...m:/mnt/storage/kernel/sched-bench> ./cpuhog 600 400 10
loops_per_ms = 51599.587203  <------------- (**)

run time = 600 ms (30959752 loops), sleep time = 400 ms, epoch time = 10 s
Obtained 3.91 seconds of execution time in 10 elapsed seconds
Obtained 3.16 seconds of execution time in 10 elapsed seconds
Obtained 3.18 seconds of execution time in 10 elapsed seconds
Obtained 3.19 seconds of execution time in 10 elapsed seconds
Obtained 2.97 seconds of execution time in 10 elapsed seconds

Comparing (*) with (**), we can see that the reported results differ,
although one would expect both processes to be treated "equally" by
the current scheduler (whatever meaning one puts into that :)


(2) now another attempt.

dimm@...m:/mnt/storage/kernel/sched-bench> ./cpuhog 600 400 10
loops_per_ms = 96711.798839

run time = 600 ms (58027079 loops), sleep time = 400 ms, epoch time = 10 s
Obtained 4.32 seconds of execution time in 10 elapsed seconds
Obtained 3.84 seconds of execution time in 11 elapsed seconds
Obtained 3.86 seconds of execution time in 11 elapsed seconds
Obtained 3.84 seconds of execution time in 11 elapsed seconds
Obtained 3.86 seconds of execution time in 11 elapsed seconds
...

dimm@...m:/mnt/storage/kernel/sched-bench> ./cpuhog 600 400 10
loops_per_ms = 96899.224806

run time = 600 ms (58139534 loops), sleep time = 400 ms, epoch time = 10 s
Obtained 4.28 seconds of execution time in 10 elapsed seconds
Obtained 3.88 seconds of execution time in 11 elapsed seconds
Obtained 3.88 seconds of execution time in 10 elapsed seconds
Obtained 3.40 seconds of execution time in 10 elapsed seconds
Obtained 3.42 seconds of execution time in 10 elapsed seconds
Obtained 3.89 seconds of execution time in 11 elapsed seconds
...


In this case, the "loops_per_ms" values happened to be almost equal,
so the reported results look quite similar. "top" showed very close
numbers in this case as well.

Maybe it's different in your case (and I don't claim that everything
is OK with SD and CFS), but I guess this could be one of the reasons.

At least, fork()-ing the second process (with different
run_time/sleep_time characteristics) from the first one would ensure
that both inherit the same "loops_per_ms".
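
A minimal standalone sketch of that idea (again hypothetical, reusing
the same illustrative burn()/calibration code as above): calibrate
exactly once, then fork(), so both processes inherit the identical
value.

/* hypothetical sketch, not from the thread: calibrate once, then fork() */
#include <time.h>
#include <unistd.h>

static volatile unsigned long sink;

static void burn(unsigned long loops)
{
	unsigned long i;

	for (i = 0; i < loops; i++)
		sink = i;
}

/* CPU time consumed by this process, in milliseconds */
static double cpu_ms(void)
{
	struct timespec t;

	clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t);
	return t.tv_sec * 1000.0 + t.tv_nsec / 1e6;
}

int main(void)
{
	const unsigned long probe = 10 * 1000 * 1000;
	double t0 = cpu_ms();
	double lpms;
	unsigned long chunk;

	/* calibrate once, before fork(), so that both processes
	 * end up with the very same loops_per_ms */
	burn(probe);
	lpms = probe / (cpu_ms() - t0);
	chunk = (unsigned long)(lpms * 600);	/* 600 ms of work */

	if (fork() == 0) {
		/* child: 60% hog -- run 600 ms, sleep 400 ms */
		for (;;) {
			burn(chunk);
			usleep(400 * 1000);
		}
	}
	/* parent: 100% cpu hog */
	for (;;)
		burn(chunk);
}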


-- 
Best regards,
Dmitry Adamushko
