Message-ID: <6d11b22b-2fb5-7dea-f88b-b32f1576a5e0@redhat.com>
Date: Mon, 3 Feb 2020 09:59:12 -0500
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Alex Kogan <alex.kogan@...cle.com>
Cc: linux@...linux.org.uk, Ingo Molnar <mingo@...hat.com>,
Will Deacon <will.deacon@....com>,
Arnd Bergmann <arnd@...db.de>, linux-arch@...r.kernel.org,
linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>, hpa@...or.com, x86@...nel.org,
Hanjun Guo <guohanjun@...wei.com>,
Jan Glauber <jglauber@...vell.com>,
Steven Sistare <steven.sistare@...cle.com>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
dave.dice@...cle.com
Subject: Re: [PATCH v8 4/5] locking/qspinlock: Introduce starvation avoidance
into CNA
On 2/3/20 8:45 AM, Peter Zijlstra wrote:
> On Thu, Jan 30, 2020 at 05:05:28PM -0500, Alex Kogan wrote:
>>> On Jan 25, 2020, at 6:19 AM, Peter Zijlstra <peterz@...radead.org> wrote:
>>>
>>> On Fri, Jan 24, 2020 at 01:19:05PM -0500, Alex Kogan wrote:
>>>
>>>> Is there a lightweight way to identify such a “prioritized” thread?
>>> No; people might for instance care about tail latencies between their
>>> identically spec'ed worker tasks.
>> I would argue that those users need to tune/reduce the intra-node handoff
>> threshold for their needs. Or disable CNA altogether.
> I really don't like boot time arguments (or tunables in generic) for a
> machine to work as it should.
>
> The default really should 'just work'.
That will be the ideal case. In reality, it usually takes a while for
the code to mature enough to do some kind of self-tuning. In the
meantime, having some configuration options available gives us more
time to figure out what the best configuration should be.
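
To make that concrete, a boot-time knob for the intra-node handoff
threshold could be wired up roughly as in the sketch below. The names,
fields and the default value here are illustrative, not the exact code
in the series:

#include <linux/init.h>
#include <linux/kernel.h>

/* Illustrative default; not the value used by the actual patch. */
static int intra_node_handoff_threshold = 1 << 16;

static int __init numa_spinlock_threshold_setup(char *str)
{
	int val;

	/* Accept a positive integer; otherwise keep the default. */
	if (get_option(&str, &val) && val > 0)
		intra_node_handoff_threshold = val;
	return 1;
}
__setup("numa_spinlock_threshold=", numa_spinlock_threshold_setup);

struct cna_node {
	/* MCS queue fields elided for brevity */
	int intra_count;	/* consecutive same-node handoffs */
};

/*
 * Sketch of the handoff decision: keep the lock on the current
 * NUMA node until the threshold is hit, then revert to strict
 * queue order so remote waiters cannot starve indefinitely.
 */
static bool cna_keep_lock_on_node(struct cna_node *cn)
{
	if (++cn->intra_count < intra_node_handoff_threshold)
		return true;		/* hand off within the node */

	cn->intra_count = 0;		/* reset, yield to main queue */
	return false;
}

A user who cares about tail latency could then boot with a low
threshold (e.g. numa_spinlock_threshold=1), or disable CNA entirely
via the series' numa_spinlock= switch, while the default keeps the
NUMA-friendly behavior.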
>> In general, Peter, seems like you are not on board with the way Longman
>> suggested to handle prioritized threads. Am I right?
> Right.
>
> Presumably you have a workload where CNA is actually a win? That is,
> what inspired you to go down this road? Which actual kernel lock is so
> contended on NUMA machines that we need to do this?
Today, a 2-socket Rome server can have 128 cores and 256 threads. If we
scale up further, we could easily have more than 1000 threads in a
system. With that many logical CPUs available, it is easy to envision
heavy spinlock contention happening fairly regularly. This patch can
alleviate that contention and improve performance under such
circumstances. Of course, which specific locks become contended will
depend on the workload.
Cheers,
Longman