Date:	Tue, 22 May 2007 13:52:51 +0200
From:	"Dmitry Adamushko" <dmitry.adamushko@...il.com>
To:	"Peter Williams" <pwil3058@...pond.net.au>
Cc:	"Ingo Molnar" <mingo@...e.hu>,
	"Linux Kernel" <linux-kernel@...r.kernel.org>
Subject: Re: [patch] CFS scheduler, -v12

On 22/05/07, Peter Williams <pwil3058@...pond.net.au> wrote:
> > [...]
> > Hum.. I guess, a 0/4 scenario wouldn't fit well in this explanation..
>
> No, and I haven't seen one.

Well, I simply took one of your calculated probabilities for something
you had actually observed - (*) below.

"The probabilities for the 3 split possibilities for random allocation are:

       2/2 (the desired outcome) is 3/8 likely,
       1/3 is 4/8 likely, and
       0/4 is 1/8 likely.            <-------------------------- (*)
"

> The split that I see is 3/1 and neither CPU seems to be favoured with
> respect to getting the majority.  However, top, gkrellm and X seem to be
> always on the CPU with the single spinner.  The CPU% reported by top is
> approx. 33%, 33%, 33% and 100% for the spinners.

Yes. That said, idle_balance() has nothing to do in this case (neither
CPU ever goes idle).

> If I renice the spinners to -10 (so that their load weights dominate the
> run queue load calculations) the problem goes away and the spinner to
> CPU allocation is 2/2 and top reports them all getting approx. 50% each.

I wonder what would happen if X gets reniced to -10 instead (and the
spinners stay at 0).. I guess something I described in my previous
mail (and dubbed an "unlikely conspiracy" :) could happen, i.e. 0/4 and
then idle_balance() comes into play..

ok, I see. You have probably achieved a similar effect with the
spinners being reniced to 10 (but here both "X" and "top" gain
additional "weight" wrt the load balancing).
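
Just to spell that "weight" argument out with made-up numbers (an
illustrative sketch only - the constants below are not the kernel's
actual per-nice load weights):

#include <stdio.h>

/*
 * Illustration only: the weights are invented.  The point is that with
 * equal weights a 3/1 spinner split can look balanced once X/top/gkrellm
 * pad the lighter CPU, while heavily weighted spinners make 2/2 the only
 * split that still looks even to a weighted-load comparison.
 */
static unsigned long cpu_load(int spinners, unsigned long spinner_weight,
                              unsigned long other_weight)
{
        return spinners * spinner_weight + other_weight;
}

int main(void)
{
        unsigned long nice0   = 1000;      /* assumed weight of a nice-0 task      */
        unsigned long boosted = 9000;      /* assumed weight of a reniced spinner  */
        unsigned long xtop    = 2 * nice0; /* X + top + gkrellm, assumed ~2x nice-0 */

        printf("nice-0 spinners,  3/1 split: %lu vs %lu\n",
               cpu_load(3, nice0, 0), cpu_load(1, nice0, xtop));
        printf("reniced spinners, 3/1 split: %lu vs %lu\n",
               cpu_load(3, boosted, 0), cpu_load(1, boosted, xtop));
        printf("reniced spinners, 2/2 split: %lu vs %lu\n",
               cpu_load(2, boosted, 0), cpu_load(2, boosted, xtop));
        return 0;
}

i.e. at equal weights the observed 3/1 split already looks even to the
balancer once X/top/gkrellm sit on the lighter CPU, so it has no reason
to move anything.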

> I'm playing with some jitter experiments at the moment.  The amount of
> jitter needs to be small (a few tenths of a second) as the
> synchronization (if it's happening) is happening at the seconds level as
> the intervals for top and gkrellm will be in the 1 to 5 second range (I
> guess -- I haven't checked) and the load balancing is every 60 seconds.

Hum.. the "every 60 seconds" part puzzles me quite a bit. Looking at
run_rebalance_domain(), I'd say that the 60*HZ starting value is
normally overwritten by the following code

               if (time_after(next_balance, sd->last_balance + interval))
                        next_balance = sd->last_balance + interval;

the "interval" seems to be *normally* shorter than "60*HZ" (according
to the default params in topology.h).. moreover, in the case of CFS

                if (interval > HZ*NR_CPUS/10)
                        interval = HZ*NR_CPUS/10;

so it can't be > 0.2*HZ in your case (i.e. at most once every 200 ms
with HZ=1000).. am I missing something? TIA
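
(A minimal standalone sketch of that arithmetic, assuming HZ=1000 and the
dual-CPU box from your report; illustrative program only, not kernel code:)

#include <stdio.h>

#define HZ      1000    /* assumed tick rate                       */
#define NR_CPUS 2       /* assumed: the dual-CPU machine in question */

int main(void)
{
        unsigned long interval = 60 * HZ;  /* hypothetical starting value,
                                              just to show the cap bites */

        /* the cap quoted above */
        if (interval > HZ * NR_CPUS / 10)
                interval = HZ * NR_CPUS / 10;

        printf("capped interval: %lu jiffies = %lu ms\n",
               interval, interval * 1000 / HZ);
        return 0;
}

which prints 200 jiffies = 200 ms, i.e. far below 60 seconds.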


>
> Peter

-- 
Best regards,
Dmitry Adamushko
