Message-ID: <5579F083.1000609@fb.com>
Date: Thu, 11 Jun 2015 16:33:07 -0400
From: Josef Bacik <jbacik@...com>
To: Peter Zijlstra <peterz@...radead.org>
CC: <riel@...hat.com>, <mingo@...hat.com>,
<linux-kernel@...r.kernel.org>, <umgwanakikbuti@...il.com>,
<morten.rasmussen@....com>, kernel-team <Kernel-team@...com>
Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for
BALANCE_WAKE

On 05/28/2015 07:05 AM, Peter Zijlstra wrote:
>
> So maybe you want something like the below; that cures the thing Morten
> raised, and we continue looking for sd, even after we found affine_sd.
>
> It also avoids the pointless idle_cpu() check Mike raised by making
> select_idle_sibling() return -1 if it doesn't find anything.
>
> Then it continues doing the full balance IFF sd was set, which is keyed
> off of sd->flags.
>
> And note (as Mike already said), BALANCE_WAKE does _NOT_ look for idle
> CPUs, it looks for the least loaded CPU. And it's damn expensive.
>
>
> Rewriting this entire thing is somewhere on the todo list :/
>
Ugh, I'm sorry. I've been running tests trying to get the numbers to look
good when I noticed some inconsistencies in my results. It turns out I
never actually tested your patch plain; I had been testing it with
BALANCE_WAKE, because I was under the assumption that that was what was
best for our workload. Since then I've fixed all of our scripts and such,
and it turns out BALANCE_WAKE actually super duper sucks for us. Testing
with your original patch, everything is significantly better (this is
with the default SD flags set, no changes at all).

So, now that I've wasted a good bit of my time and everybody else's, can
we go about pushing this patch upstream? If you are happy with it the
way it is, I'll go ahead and pull it into our kernels and watch to make
sure it ends up upstream at some point. Thanks,
Josef