Message-ID: <20190207095639.GA32494@hirez.programming.kicks-ass.net>
Date: Thu, 7 Feb 2019 10:56:39 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Valentin Schneider <valentin.schneider@....com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
vincent.guittot@...aro.org, morten.rasmussen@....com,
Dietmar.Eggemann@....com
Subject: Re: [PATCH 5/5] sched/fair: Skip LLC nohz logic for asymmetric
systems
On Wed, Feb 06, 2019 at 05:26:06PM +0000, Valentin Schneider wrote:
> Hi,
>
> On 06/02/2019 16:14, Peter Zijlstra wrote:
> [...]
> >> @@ -9545,6 +9545,17 @@ static void nohz_balancer_kick(struct rq *rq)
> >> }
> >>
> >> rcu_read_lock();
> >> +
> >> + if (static_branch_unlikely(&sched_asym_cpucapacity))
> >> + /*
> >> + * For asymmetric systems, we do not want to nicely balance
> >> + * cache use, instead we want to embrace asymmetry and only
> >> + * ensure tasks have enough CPU capacity.
> >> + *
> >> + * Skip the LLC logic because it's not relevant in that case.
> >> + */
> >> + goto check_capacity;
> >> +
> >> sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
> >> if (sds) {
> >> /*
> >
> > Since (before this) the actual order of the various tests doesn't
> > matter, it's a logical cascade of conditions for which to KICK_MASK.
> >
>
> Ah, I assumed the order did matter somewhat: the "cheaper" LLC check comes
> first and the more costly loops sit further down, in case we are still
> looking for a reason to do a kick.
I did not in fact consider that; I only looked at the logical structure
of the thing. You might want to double check :-)
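
FWIW, a tiny standalone sketch of the control flow being discussed; this is
a toy userspace model, not kernel code, and every name and number in it is
invented for illustration. The cheap LLC test comes first, the costlier
capacity test after it, and an asymmetric system jumps straight past the
LLC part, mirroring the hunk quoted above:

/*
 * Toy userspace model (not kernel code, all values invented) of the
 * decision cascade: the LLC test is a cheap check of shared per-LLC
 * state, while the capacity test stands in for the more costly per-CPU
 * scan.  With the "static key" set on an asymmetric system, we jump
 * straight past the LLC logic.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

static bool sched_asym_cpucapacity = true;	/* stand-in for the static key */
static int llc_nr_busy_cpus = 3;		/* stand-in for sds->nr_busy_cpus */
static int cpu_capacity[NR_CPUS] = { 1024, 1024, 1024, 1024, 512, 512, 512, 512 };

static bool needs_capacity_kick(int cpu, int util)
{
	/* Costlier path: the real code walks CPUs of the sched domain. */
	return util > cpu_capacity[cpu];
}

static bool should_kick(int cpu, int util)
{
	if (sched_asym_cpucapacity)
		/* Asymmetric system: skip the LLC logic entirely. */
		goto check_capacity;

	/* Cheap check: enough busy CPUs in the LLC to warrant a kick? */
	if (llc_nr_busy_cpus > 1)
		return true;

check_capacity:
	return needs_capacity_kick(cpu, util);
}

int main(void)
{
	/* CPU 5 is a little CPU (capacity 512) with utilization 700. */
	printf("kick: %d\n", should_kick(5, 700));
	return 0;
}

Hoisting the asymmetry check means asymmetric systems never pay even for
the cheap LLC test, while symmetric systems keep the cheap-to-expensive
ordering.
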
> > We can easily reorder and short-circuit the cascade like so, no?
> >
> > The only concern is if sd_llc_shared < sd_asym_capacity; in which case
> > we just lost a balance opportunity. Not sure how to best retain that
> > though.
> >
>
> I'm afraid I don't follow - we don't lose a balance opportunity with the
> below change (compared to the original patch), do we?
What if each big/little cluster had multiple cache domains? Would we not
want to spread the cache usage inside the big and the little cluster,
respectively?
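
To make that question concrete, here is a hypothetical topology, again a
toy userspace model with invented values, where each capacity cluster spans
two cache domains; in such a case skipping the LLC logic whenever the
system is asymmetric also gives up the cache spreading inside each cluster:

/*
 * Hypothetical big.LITTLE-style topology (invented, for illustration
 * only) where each capacity cluster is split into two LLCs.
 */
#include <stdio.h>

#define NR_CPUS 8

struct cpu_info {
	int llc_id;		/* which last-level cache the CPU shares */
	int capacity;		/* 1024 = big, 512 = little */
};

static const struct cpu_info cpus[NR_CPUS] = {
	{ 0, 1024 }, { 0, 1024 },	/* big cluster, LLC 0 */
	{ 1, 1024 }, { 1, 1024 },	/* big cluster, LLC 1 */
	{ 2,  512 }, { 2,  512 },	/* little cluster, LLC 2 */
	{ 3,  512 }, { 3,  512 },	/* little cluster, LLC 3 */
};

int main(void)
{
	/* Count distinct LLCs per capacity level. */
	int big_llcs = 0, little_llcs = 0, seen[NR_CPUS] = { 0 };

	for (int i = 0; i < NR_CPUS; i++) {
		if (seen[cpus[i].llc_id])
			continue;
		seen[cpus[i].llc_id] = 1;
		if (cpus[i].capacity == 1024)
			big_llcs++;
		else
			little_llcs++;
	}

	printf("big cluster spans %d LLCs, little cluster spans %d LLCs\n",
	       big_llcs, little_llcs);
	/*
	 * With more than one LLC per cluster, skipping the LLC check loses
	 * the intra-cluster cache spreading the nohz kick would otherwise do.
	 */
	return 0;
}
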