Message-ID: <AANLkTinf7fj5A4DnQOWdj8QeMhf4exPGgpMUdTGA9mTC@mail.gmail.com>
Date:	Wed, 6 Oct 2010 01:23:55 -0700
From:	Nikhil Rao <ncrao@...gle.com>
To:	Mike Galbraith <efault@....de>
Cc:	Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>,
	Venkatesh Pallipadi <venki@...gle.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/3][RFC] Improve load balancing when tasks have large
 weight differential

On Sun, Oct 3, 2010 at 8:08 PM, Mike Galbraith <efault@....de> wrote:
> On Wed, 2010-09-29 at 12:32 -0700, Nikhil Rao wrote:
>> The closest I have is a quad-core dual-socket machine (MC, CPU
>> domains). And I'm having trouble reproducing it on that machine as
>> well :-( I ran 5 soaker threads (one of them niced to -15) for a few
>> hours and didn't see the problem. Can you please give me some trace
>> data & schedstats to work with?
>
> Booting with isolcpus or offlining the excess should help.
>

Sorry for the late reply. Booting with isolcpus did the trick, thanks.
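For readers following along, the two reproduction approaches Mike suggested (isolating CPUs at boot, or offlining the excess at runtime) would look roughly like this; the CPU numbers are hypothetical and depend on the machine's topology:

```shell
# 1) Isolate CPUs from the scheduler at boot: append to the kernel
#    command line in the bootloader configuration, e.g.
#      isolcpus=2,3
#
# 2) Or offline the excess CPUs at runtime via the sysfs hotplug
#    interface (requires root):
echo 0 > /sys/devices/system/cpu/cpu2/online
echo 0 > /sys/devices/system/cpu/cpu3/online
```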

... and now to dig into why this is happening.

-Thanks,
Nikhil

>> Looking at the patch/code, I suspect active migration on the CPU
>> scheduling domain pushes the nice 0 task (running on the same socket
>> as the nice -15 task) to the other socket. This leaves you with an
>> idle core on the nice -15 socket, and with soaker threads there is no
>> way to come back to a 100% utilized state. One possible explanation is
>> that the group capacity for a sched group in the CPU sched domain is
>> rounded to 1 (instead of 2). I have a patch below that throws a hammer
>> at the problem and uses group weight instead of group capacity (this
>> is experimental, will refine it if it works). Can you please see if
>> that solves the problem?
>
> Nope, didn't help.  I'll poke at it, but am squabbling elsewhere atm.
>
>        -Mike
>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
