Message-ID: <20160406133614.7majuigbr7wmduks@floor.thefacebook.com>
Date:	Wed, 6 Apr 2016 09:36:14 -0400
From:	Chris Mason <clm@...com>
To:	Mike Galbraith <mgalbraith@...e.de>
CC:	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>,
	Matt Fleming <matt@...eblueprint.co.uk>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RFC] select_idle_sibling experiments

On Wed, Apr 06, 2016 at 09:27:24AM +0200, Mike Galbraith wrote:
> > On Tue, 2016-04-05 at 14:08 -0400, Chris Mason wrote:
> 
> > Now, on to the patch.  I pushed some code around and narrowed the
> > problem down to select_idle_sibling().  We have cores going into and out
> > of idle fast enough that even this cut our latencies in half:
> 
> Are you using NO_HZ?  If so, you may want to try the attached.

I'll definitely give it a shot.  When I tried using the nohz idle bitmap
(Peter's idea) instead of the for_each_cpu() walks, it came out slower.
It feels like the cpus aren't getting all the way down into the idle
loop before more work comes, but I'll have to check.

> 
> > static int select_idle_sibling(struct task_struct *p, int target)
> >                                 goto next;
> >  
> >                         for_each_cpu(i, sched_group_cpus(sg)) {
> > -                               if (i == target || !idle_cpu(i))
> > +                               if (!idle_cpu(i))
> >                                         goto next;
> >                         }
> >  
> > IOW, by the time we get down to for_each_cpu(), the idle_cpu() check
> > done at the top of the function is no longer valid.
> 
> Ok, that's only an optimization, could go if it's causing trouble.

It's more an indication of how long we're spending in the current scan.
Long enough for the tests we're currently doing to be inaccurate.

[ my beautiful patch ]

> Ew.  That may improve your latency in every load, but worst case
> package walk will hurt like hell on CPUs with insane number of threads.
>
>  That full search also turns the evil face of two-faced little
> select_idle_sibling() into its only face, the one that bounces tasks
> about much more than they appreciate.
> 
> Looking for an idle core first delivers the most throughput boost, and
> only looking at target's threads if you don't find one keeps the bounce
> and traverse pain down to a dull roar, while at least trying to get
> that latency win.  To me, your patch looks like it trades harm to many,
> for good for a few.
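[ A self-contained sketch of the search order Mike describes: look for a
fully idle core first, and only fall back to target's own SMT siblings,
keeping the traverse cost bounded.  cpumasks are modeled as plain arrays,
idle_cpu() is a table lookup, and all function names here are invented
for the sketch. ]

```c
#include <stdint.h>

#define NR_CPUS   8
#define SMT_WIDTH 2            /* two hardware threads per core */

static int cpu_idle[NR_CPUS];  /* 1 = idle; set up by the caller */

static int idle_cpu(int cpu) { return cpu_idle[cpu]; }

/* Phase 1: a core whose threads are ALL idle (the throughput win). */
static int find_idle_core(void)
{
	for (int core = 0; core < NR_CPUS / SMT_WIDTH; core++) {
		int all_idle = 1;
		for (int t = 0; t < SMT_WIDTH; t++)
			if (!idle_cpu(core * SMT_WIDTH + t))
				all_idle = 0;
		if (all_idle)
			return core * SMT_WIDTH;  /* first thread of that core */
	}
	return -1;
}

/* Phase 2: any idle thread on target's own core only, not the package. */
static int scan_siblings(int target)
{
	int core = target / SMT_WIDTH;
	for (int t = 0; t < SMT_WIDTH; t++) {
		int cpu = core * SMT_WIDTH + t;
		if (idle_cpu(cpu))
			return cpu;
	}
	return target;   /* nothing idle: stay put */
}

static int select_idle_sibling_sketch(int target)
{
	int cpu = find_idle_core();
	return cpu >= 0 ? cpu : scan_siblings(target);
}
```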

Yes, I'm tossing an important optimization.  The goal wasn't to get rid
of that at all, but instead to find a way to get both.  I just ran out
of ideas ;)

> 
> A behavior switch would be better.  It can't get any dumber, but trying
> to make it smarter makes it too damn fat.  As it sits, it's aiming in
> the general direction of the bullseye.. and occasionally hits the wall.
> 
> 	-Mike
>
> sched: ratelimit nohz
> 
> Entering nohz code on every micro-idle is too expensive to bear.

This I really like.  I'll setup a benchmark in production with it and
come back with results.
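[ A self-contained sketch of the ratelimit idea in the patch description:
only stop the tick if the CPU has gone a full ratelimit window without a
wakeup, so micro-idles skip the expensive nohz entry path.  The window
length, the simulated jiffies counter, and all names are assumptions made
up for this sketch. ]

```c
#include <stdint.h>

#define NOHZ_RATELIMIT_JIFFIES 2   /* assumed window, invented here */
#define NR_CPUS 8

static uint64_t jiffies;            /* simulated global tick counter */
static uint64_t last_wakeup[NR_CPUS];

/* Called on every wakeup routed to @cpu. */
static void note_wakeup(int cpu)
{
	last_wakeup[cpu] = jiffies;
}

/* Called from the idle path: has this CPU been quiet long enough
 * that entering nohz is worth the cost? */
static int can_stop_tick(int cpu)
{
	return jiffies - last_wakeup[cpu] >= NOHZ_RATELIMIT_JIFFIES;
}
```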

-chris
