Message-ID: <556F1411.6050206@fb.com>
Date:	Wed, 3 Jun 2015 10:49:53 -0400
From:	Josef Bacik <jbacik@...com>
To:	Peter Zijlstra <peterz@...radead.org>,
	Rik van Riel <riel@...hat.com>
CC:	<mingo@...hat.com>, <linux-kernel@...r.kernel.org>,
	<umgwanakikbuti@...il.com>, <morten.rasmussen@....com>,
	kernel-team <Kernel-team@...com>
Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for
 BALANCE_WAKE

On 06/03/2015 10:24 AM, Peter Zijlstra wrote:
> On Wed, 2015-06-03 at 10:12 -0400, Rik van Riel wrote:
>
>> There is a policy vs mechanism thing here. Ingo and Peter
>> are worried about the overhead in the mechanism of finding
>> an idle CPU.  Your measurements show that the policy of
>> finding an idle CPU is the correct one.
>
> For his workload; I'm sure I can find a workload where it hurts.
>
> In fact, I'm fairly sure Mike knows one off the top of his head, seeing
> how he's the one playing about trying to shrink that idle search :-)
>

So the perf bench sched microbenchmarks are a pretty good analog for our 
workload.  I run

perf bench sched messaging -g 100 -l 10000
perf bench sched pipe

5 times each and average the results; the messaging benchmark is the 
closest analog and the one I look at.  I get roughly 56 seconds of 
runtime on plain 4.0 and 47 seconds patched, and that's how I 
sanity-check my little experiments before running the full real workload.
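
For concreteness, the harness is basically just a loop like the sketch 
below (it assumes perf prints a "Total time: N.NNN [sec]" line, which 
is what the messaging benchmark reports here):

    # run the messaging benchmark 5 times and average the reported time
    for i in 1 2 3 4 5; do
            perf bench sched messaging -g 100 -l 10000
    done | awk '/Total time/ { sum += $3; n++ } END { printf "avg: %.3f sec\n", sum / n }'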

I don't want to tune the scheduler just for our workload, but the 
microbenchmarks we have are also showing the same performance 
improvements.  I would be super interested in workloads where this patch 
doesn't help, so we could integrate them into perf bench sched and be 
more confident making policy changes in the scheduler.  So Mike, if you 
have something specific in mind, please elaborate; I'm happy to do the 
legwork to get it into perf bench and to test things until we're happy.

In the meantime I really want to get this fixed for us; I don't want to 
carry some out-of-tree patch around until we rebase again next year and 
then do this whole dance again.  What would be the way forward for 
getting this fixed now?  Do I need to hide it behind a sysctl or config 
option?
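
If a feature flag is the preferred route, I assume toggling it for 
testing would look something like this (IDLE_CPU_WAKE is just a 
placeholder name for whatever the flag ends up being called, and this 
assumes CONFIG_SCHED_DEBUG with debugfs mounted):

    # scheduler feature flags are exposed via debugfs
    cat /sys/kernel/debug/sched_features                      # list flags
    echo IDLE_CPU_WAKE    > /sys/kernel/debug/sched_features  # enable
    echo NO_IDLE_CPU_WAKE > /sys/kernel/debug/sched_features  # disable

Thanks,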

Josef
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
