Message-ID: <1365665447.19620.102.camel@marge.simpson.net>
Date:	Thu, 11 Apr 2013 09:30:47 +0200
From:	Mike Galbraith <efault@....de>
To:	Michael Wang <wangyun@...ux.vnet.ibm.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	LKML <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...nel.org>, Alex Shi <alex.shi@...el.com>,
	Namhyung Kim <namhyung@...nel.org>,
	Paul Turner <pjt@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
	Ram Pai <linuxram@...ibm.com>
Subject: Re: [PATCH] sched: wake-affine throttle

On Thu, 2013-04-11 at 14:01 +0800, Michael Wang wrote: 
> On 04/10/2013 05:22 PM, Michael Wang wrote:
> > Hi, Peter
> > 
> > Thanks for your reply :)
> > 
> > On 04/10/2013 04:51 PM, Peter Zijlstra wrote:
> >> On Wed, 2013-04-10 at 11:30 +0800, Michael Wang wrote:
> >>> | 15 GB   |      32 | 35918 |   | 37632 | +4.77% |   | 47923 | +33.42% |   | 52241 | +45.45% |
> >>
> >> So I don't get this... is wake_affine() once every millisecond _that_
> >> expensive?
> >>
> >> Seeing as we get a 45%!! improvement out of once every 100ms, that would
> >> mean we're spending something like 1/3rd of our time in wake_affine()?
> >> That's preposterous. So what's happening?
> > 
> > Not all of the regression was caused by overhead; adopting curr_cpu rather
> > than prev_cpu for select_idle_sibling() is a more important cause of the
> > pgbench regression.
> > 
> > In other words, for pgbench we waste time in wake_affine() and make the
> > wrong decision most of the time. The previous patch showed that
> > wake_affine() does pull unrelated tasks together; that's good if the
> > current cpu still holds cache-hot data for the wakee, but that's not the
> > case for a workload like pgbench.
> 
> Please let me know if I failed to express my thoughts clearly.
> 
> I know it's hard to figure out why throttling brings so much benefit,
> since the wake-affine stuff is a black box with too many unmeasurable
> factors, but that's actually why we finally settled on this throttle idea
> rather than an approach like wakeup-buddy, although both of them help to
> stop the regression.
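
(For reference, the throttle idea under discussion boils down to something
like the sketch below. This is a minimal illustration, not the actual patch;
the per-task field last_wake_affine and the knob wake_affine_interval_ms are
made-up names for the example.)

	/*
	 * Minimal sketch of a wake-affine throttle (hypothetical names,
	 * not the actual patch): rate-limit the pull decision with a
	 * per-task timestamp so wake_affine() is only consulted once per
	 * interval, i.e. the 1ms..100ms columns in the table above.
	 */
	static int wake_affine_throttled(struct task_struct *p)
	{
		unsigned long interval = msecs_to_jiffies(wake_affine_interval_ms);

		if (time_before(jiffies, p->last_wake_affine + interval))
			return 1;	/* throttled: keep prev_cpu, skip the pull */

		p->last_wake_affine = jiffies;
		return 0;
	}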

For that load, as soon as clients+server exceeds socket size, pull is
doomed to always be a guaranteed loser.  There simply is no way to win:
some tasks must drag their data across nodes no matter what you do,
because there is one and only one source of data, so you cannot possibly
do anything but harm by pulling or in any other way disturbing task
placement, because you force tasks to re-heat their footprint every time
you migrate someone, with zero benefit to offset the cost.  That is why
the closer you get to completely killing all migration, the better your
throughput gets with this load.. you're killing the cost of migration in
a situation where there simply is no gain to be had.

That's why that wakeup-buddy thingy is a ~good idea.  It will allow 1:1
buddies that can and do benefit from motion to pair up and jabber in a
shared cache (though that motion needs slowing down too), _and_ detect
the case where wakeup migration is utterly pointless.  Just killing
wakeup migration OTOH should (I'd say emphatically _will_) hurt pgbench
just as much, because spreading a smallish task set which could share a
cache across several nodes hurts things like pgbench via cache misses
just as much as any other load.. it's just that once this load (or its
ilk) doesn't fit in a node, you're absolutely screwed as far as misses
go; you will eat that cost because there simply is no other option.
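
(A sketch of what such buddy detection might look like: this is a
hypothetical illustration, not the wakeup-buddy patch itself; the fields
last_wakee and wakee_flips and the flip threshold are assumptions made for
the example.)

	/*
	 * Hypothetical "wakeup buddy" detection: a waker that keeps waking
	 * the same task flips wakees rarely (a 1:1 pair, worth pulling into
	 * a shared cache), while a 1:N server like pgbench's 'mom' flips
	 * constantly, so pulling it is pointless.
	 */
	static void record_wakee(struct task_struct *p)
	{
		if (current->last_wakee != p) {
			current->last_wakee = p;
			current->wakee_flips++;
		}
	}

	static int wakeup_buddies(struct task_struct *waker, struct task_struct *wakee)
	{
		/* low flip counts on both sides suggest a stable 1:1 pairing */
		return waker->wakee_flips < 2 && wakee->wakee_flips < 2;
	}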

Any migration is pointless for this thing once it exceeds socket size.
And fairness, which plays a dominant role, is absolutely not throughput's
best friend when one component of a load requires more CPU than the other
components, which is very definitely the case with pgbench.  Fairness
hurts this thing a lot.  That's why pgbench took a whopping huge hit when
I fixed up select_idle_sibling() to not completely wreck fast/light
communicating tasks: the fix forced pgbench to face the consequences of a
fair scheduler by cutting off its escape routes.  Previously, searching
for _any_ even ever-so-briefly idle spot to place tasks meant wakeup
preemption just didn't happen, and when we failed to pull, we did the
very same search on the wakee's original socket, thus providing pgbench
the fairness escape mechanism that it needs.
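
(Heavily abridged, the escape route described above looks roughly like the
sketch below. This is illustrative only, not the kernel's actual
select_idle_sibling(); the function name is invented, and
cpu_llc_shared_mask() stands in for however the LLC sibling set is reached.)

	/*
	 * Illustrative sketch: if the target cpu is idle, take it, else
	 * scan the cpus sharing its last level cache for any idle one.
	 * Finding *any* briefly idle spot is the fairness escape route:
	 * the woken task runs immediately instead of waiting its turn.
	 */
	static int find_idle_escape(struct task_struct *p, int target)
	{
		int cpu;

		if (idle_cpu(target))
			return target;

		for_each_cpu(cpu, cpu_llc_shared_mask(target)) {
			if (idle_cpu(cpu))
				return cpu;
		}

		return target;	/* no idle escape; face the fair scheduler */
	}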

When you wake to idle cores, you do not have a nanosecond-resolution
ultra-fair scheduler, with the fairness price to be paid.. tasks run as
long as they want to run, or at least for full ticks, which of course
makes the hard-working load components a lot more productive.  Hogs can
be hogs.  For pgbench run in 1:N mode, the hardest working load component
is the mother of all the work, the (singular) server.  Any time 'mom' is
not continuously working her little digital a$$ off to keep all those
kids fed, you have a performance problem on your hands; the entire load
stalls, lives and dies with the one and only 'mom'.

-Mike
