Message-ID: <c872ddfb-eb12-e293-7ade-ef602ac45b1c@arm.com>
Date: Thu, 6 Dec 2018 17:28:37 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Steven Sistare <steven.sistare@...cle.com>, mingo@...hat.com,
peterz@...radead.org
Cc: subhra.mazumdar@...cle.com, dhaval.giani@...cle.com,
daniel.m.jordan@...cle.com, pavel.tatashin@...rosoft.com,
matt@...eblueprint.co.uk, umgwanakikbuti@...il.com,
riel@...hat.com, jbacik@...com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, quentin.perret@....com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 03/10] sched/topology: Provide cfs_overload_cpus bitmap

Hi Steve,

On 06/12/2018 16:40, Steven Sistare wrote:
> [...]
>>
>> Ah yes, that would work. Thing is, I had ruled out putting the misfit
>> masks in the sd_llc_shareds, since from a logical standpoint they don't
>> really belong there.
>>
>> With asymmetric CPU capacities we kind of disregard the cache landscape
>
> Sure, but adding awareness of the cache hierarchy can only make it better,
> and a per-LLC mask organization can serve both the overloaded and misfit
> use cases quite naturally.
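
For concreteness, I read that as something roughly like the below, where
cfs_overload_cpus is the field from your patch and misfit_cpus is a name
I'm making up purely for the sake of the sketch:

    /*
     * Sketch only: one sparsemask per LLC for each use case, hung off
     * sched_domain_shared. misfit_cpus is hypothetical.
     */
    struct sched_domain_shared {
            atomic_t        ref;
            atomic_t        nr_busy_cpus;
            int             has_idle_cores;
            struct sparsemask *cfs_overload_cpus; /* overloaded CPUs in LLC */
            struct sparsemask *misfit_cpus;       /* CPUs running misfit tasks */
    };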
> [...]
>> So in truth I was envisioning separate SD_ASYM_CPUCAPACITY-based
>> sparsemasks, which is why I was rambling about SD_ASYM_CPUCAPACITY siblings
>> of sd_llc_*()... *But* after I had a go at it, it looked to me like that
>> was a lot of duplicated code.
>
> I would be happy to review your code and make suggestions to reduce duplication,
> and happy to continue to discuss clean and optimal handling for misfits. However,
> I have a request: can we push my patches across the finish line first? Stealing
> for misfits can be its own patch series. Please consider sending your Reviewed-by
> for the next version of my series. I will send it soon.
>
Sure, as things stand right now I'm fairly convinced this doesn't harm
asymmetric systems.
The only thing I would add (ignoring misfits) is that with EAS we would
need to gate stealing with something like:
    !static_branch_unlikely(&sched_energy_present) ||
    READ_ONCE(rq->rd->overutilized)
And who "gets" to add this gating (or at least, when must it be added)
depends on which patch-set gets in first.
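
To spell it out, the gate could wrap the steal attempt itself, something
like the sketch below. try_steal() stands in for whatever entry point your
series ends up exposing, and the EAS bits come from Quentin's series, so
treat the names as illustrative:

    /* Stealing is fine without EAS; with EAS, only once overutilized. */
    static inline bool steal_allowed(struct rq *rq)
    {
            return !static_branch_unlikely(&sched_energy_present) ||
                   READ_ONCE(rq->rd->overutilized);
    }

    /* ... in the newidle path, before attempting to steal ... */
            if (steal_allowed(rq))
                    pulled_task = try_steal(rq, rf);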
[...]
>> Sadly I think that doesn't work as well for cfs_overload_cpus since you
>> can't split a sparsemask's chunks over several NUMA nodes, so we'd be
>> stuck with an allocation on a single node (but we already do that in some
>> places, e.g. for nohz.idle_cpus_mask, so... Is it that bad?).
>
> It can be bad for high memory bandwidth workloads, as the sparsemasks will
> be displaced from cache and we'd incur remote memory latencies on the next
> access.
>
Aye, I just caught up with the LPC videos and was about to reply here to
say that, all things considered, it's probably not such a good idea...
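
For reference, the shape that does seem right to me is the per-LLC,
node-local allocation, roughly as below. I'm assuming a node-aware
allocator along the lines of your sparsemask API, so take the exact
sparsemask_alloc_node() signature with a grain of salt:

    static int sd_llc_alloc(struct sched_domain *sd)
    {
            struct sched_domain_shared *sds = sd->shared;
            int nid = cpu_to_node(cpumask_first(sched_domain_span(sd)));

            /* Place each LLC's mask on the node containing that LLC. */
            sds->cfs_overload_cpus = sparsemask_alloc_node(nr_cpu_ids,
                                                           GFP_KERNEL, nid);
            return sds->cfs_overload_cpus ? 0 : -ENOMEM;
    }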
> - Steve
>