Message-ID: <20150528122931.GA8592@gmail.com>
Date: Thu, 28 May 2015 14:29:31 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Mike Galbraith <mgalbraith@...ell.com>,
Josef Bacik <jbacik@...com>, riel@...hat.com, mingo@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for
BALANCE_WAKE
* Peter Zijlstra <peterz@...radead.org> wrote:
> > On Thu, 2015-05-28 at 13:49 +0200, Ingo Molnar wrote:
>
> > > What's the biggest you've seen?
>
> Wikipedia here: http://en.wikipedia.org/wiki/Haswell_%28microarchitecture%29
>
> Tells us HSW-E[PX] has 18-core/36-thread SKUs.
>
> But yes, what Mike says: it's bound to only get bigger.
So it's starting to get big enough to warrant an optimization of the way we
account and discover idle CPUs:
So when a CPU goes idle, it has idle cycles it could spend on registering itself
in either an idle-CPUs bitmap or an idle-CPUs queue. The queue (or bitmap)
would strictly be shared only between CPUs within the same domain, so the cache
bouncing cost from that is still small and package-local. (We already have remote
access overhead in select_idle_sibling(), due to having to access half of all
remote rqs on average.)
Such an approach would make select_idle_sibling() independent of the size of the
cores domain; it would make it essentially O(1).
( There's a bit of a complication with rq->wake_list, but I think it would be good
enough to just register/unregister from the idle handler, if something is idle
only short term it should probably not be considered for SMP balancing. )
But I'd definitely not go towards making our SMP balancing's macro idle-selection
decisions poorer just because our internal implementation is
O(nr_cores_per_package) ...
Agreed?
Thanks,
Ingo