Date:	Mon, 04 Oct 2010 05:08:37 +0200
From:	Mike Galbraith <efault@....de>
To:	Nikhil Rao <ncrao@...gle.com>
Cc:	Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>,
	Venkatesh Pallipadi <venki@...gle.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/3][RFC] Improve load balancing when tasks have large
 weight differential

Sorry for the late reply.  (fired up your patchlet bright and early so
it didn't rot in my inbox any longer;)

On Wed, 2010-09-29 at 12:32 -0700, Nikhil Rao wrote:
> On Tue, Sep 28, 2010 at 6:45 PM, Mike Galbraith <efault@....de> wrote:
> > On Tue, 2010-09-28 at 14:15 -0700, Nikhil Rao wrote:
> >
> >> Thanks for running this. I've not been able to reproduce what you are
> >> seeing on the few test machines that I have (different combinations of
> >> MC, CPU and NODE domains). Can you please give me more info about
> >> your setup?
> >
> > It's a plain-jane Q6600 box, so it has only MC and CPU domains.
> >
> > It doesn't necessarily _instantly_ "stick"; it can take a couple of
> > tries, or a little time.
> 
> The closest I have is a quad-core dual-socket machine (MC, CPU
> domains). And I'm having trouble reproducing it on that machine as
> well :-( I ran 5 soaker threads (one of them niced to -15) for a few
> hours and didn't see the problem. Can you please give me some trace
> data & schedstats to work with?
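
(In case the load matters: "soaker" here means nothing fancier than a
CPU-bound busy loop.  A minimal sketch, the command-line nice level
purely for illustration, negative levels needing root:

	/* soaker.c - CPU-bound busy loop; start one instance per CPU,
	 * optionally passing a nice level, e.g. ./soaker -15. */
	#include <sys/resource.h>
	#include <stdlib.h>

	int main(int argc, char **argv)
	{
		/* renice ourselves if a level was given */
		if (argc > 1)
			setpriority(PRIO_PROCESS, 0, atoi(argv[1]));

		for (;;)
			;	/* burn cycles */
	}
)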

Booting with isolcpus or offlining the excess CPUs should help.
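
E.g., to mimic the Q6600's topology on your eight-CPU box, boot with
something like isolcpus=4-7, or hot-unplug the extras at runtime.  A
sketch of the latter (the sysfs knob is standard, the CPU range is just
an example):

	/* offline.c - take cpu4..cpu7 down via sysfs; same effect as
	 * echo 0 > /sys/devices/system/cpu/cpuN/online.  Needs root. */
	#include <stdio.h>

	int main(void)
	{
		int cpu;

		for (cpu = 4; cpu <= 7; cpu++) {
			char path[64];
			FILE *f;

			snprintf(path, sizeof(path),
				 "/sys/devices/system/cpu/cpu%d/online", cpu);
			f = fopen(path, "w");
			if (!f) {
				perror(path);
				continue;
			}
			fputs("0", f);
			fclose(f);
		}
		return 0;
	}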

> Looking at the patch/code, I suspect active migration on the CPU
> scheduling domain pushes the nice 0 task (running on the same socket
> as the nice -15 task) to the other socket. This leaves you with an
> idle core on the nice -15 socket, and with soaker threads there is no
> way to come back to a 100% utilized state. One possible explanation is
> that the group capacity for a sched group in the CPU sched domain is
> rounded to 1 (instead of 2). I have a patch below that throws a hammer
> at the problem and uses group weight instead of group capacity (this
> is experimental, will refine it if it works). Can you please see if
> that solves the problem?

Nope, didn't help.  I'll poke at it, but am squabbling elsewhere atm.
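
FWIW, the rounding you mention is easy to demo in isolation.  A
standalone sketch (SCHED_LOAD_SCALE as in the tree; the cpu_power
numbers below are made up, not pulled from a trace):

	/* capacity.c - how a two-CPU group's capacity rounds down to 1
	 * once its cpu_power drops below 1.5 * SCHED_LOAD_SCALE. */
	#include <stdio.h>

	#define SCHED_LOAD_SCALE	1024UL
	/* simplified stand-in for the kernel's DIV_ROUND_CLOSEST() */
	#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))

	int main(void)
	{
		unsigned long full = 2 * SCHED_LOAD_SCALE; /* both CPUs at full power */
		unsigned long scaled = 2 * 700;		   /* power scaled down */

		printf("capacity full:   %lu\n",
		       DIV_ROUND_CLOSEST(full, SCHED_LOAD_SCALE));   /* 2 */
		printf("capacity scaled: %lu\n",
		       DIV_ROUND_CLOSEST(scaled, SCHED_LOAD_SCALE)); /* 1 */
		return 0;
	}

Once capacity reads 1, the balancer considers the group full with a
single task on it, which would fit the stuck-idle-core symptom, though
given the hammer didn't help here, it may not be the whole story.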

	-Mike

