Message-ID: <2c8aabab-1bb0-0708-94c4-305fd860609e@redhat.com>
Date: Thu, 31 Jan 2019 12:38:57 -0500
From: Waiman Long <longman@...hat.com>
To: Alex Kogan <alex.kogan@...cle.com>, linux@...linux.org.uk,
peterz@...radead.org, mingo@...hat.com, will.deacon@....com,
arnd@...db.de, linux-arch@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Cc: steven.sistare@...cle.com, daniel.m.jordan@...cle.com,
dave.dice@...cle.com, rahul.x.yadav@...cle.com
Subject: Re: [PATCH 2/3] locking/qspinlock: Introduce CNA into the slow path
of qspinlock
On 01/30/2019 10:01 PM, Alex Kogan wrote:
> In CNA, spinning threads are organized in two queues, a main queue for
> threads running on the same socket as the current lock holder, and a
> secondary queue for threads running on other sockets. For details,
> see https://arxiv.org/abs/1810.05600.
>
> Note that this variant of CNA may introduce starvation by continuously
> passing the lock to threads running on the same socket. This issue
> will be addressed later in the series.
>
> Signed-off-by: Alex Kogan <alex.kogan@...cle.com>
> Reviewed-by: Steve Sistare <steven.sistare@...cle.com>
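For my own understanding, here is a minimal standalone sketch of the
two-queue split described in the commit message above. This is
illustrative only, not code from the patch, and all names are made up:

/*
 * Simplified illustration of the CNA queue split: waiters whose NUMA
 * node differs from the lock holder's are moved to a secondary queue
 * so the lock keeps circulating within one socket.
 */
#include <stdio.h>

struct cna_waiter {
	int numa_node;			/* socket the waiter runs on */
	struct cna_waiter *next;
};

/* Detach waiters whose node differs from @node into *secondary. */
static struct cna_waiter *split_queue(struct cna_waiter *main_q, int node,
				      struct cna_waiter **secondary)
{
	struct cna_waiter *keep = NULL, **keep_tail = &keep;
	struct cna_waiter *other = NULL, **other_tail = &other;

	while (main_q) {
		struct cna_waiter *w = main_q;

		main_q = main_q->next;
		w->next = NULL;
		if (w->numa_node == node) {
			*keep_tail = w;
			keep_tail = &w->next;
		} else {
			*other_tail = w;
			other_tail = &w->next;
		}
	}
	*secondary = other;
	return keep;
}

int main(void)
{
	struct cna_waiter w[4] = {
		{ .numa_node = 0 }, { .numa_node = 1 },
		{ .numa_node = 0 }, { .numa_node = 1 },
	};
	struct cna_waiter *head = &w[0], *secondary;
	int i;

	for (i = 0; i < 3; i++)
		w[i].next = &w[i + 1];

	head = split_queue(head, 0, &secondary);
	for (; head; head = head->next)
		printf("main queue: node %d\n", head->numa_node);
	for (; secondary; secondary = secondary->next)
		printf("secondary queue: node %d\n", secondary->numa_node);
	return 0;
}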
Just wondering if you have tried building with the PARAVIRT_SPINLOCKS
option enabled to see if this patch may break the PV qspinlock code.
Anyway, I do believe your claim that a NUMA-aware qspinlock is good for
large systems with many nodes. However, all this extra code is pure
overhead for small systems that have a single node/socket, for instance.
I would support doing something similar to what was done to support PV
qspinlock. IOW, a separate slowpath function that can be patched in to
become the default, depending on the system it is running on or on a
kernel boot option.
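To make the idea concrete, here is a rough standalone sketch of the kind
of selection I mean. A plain function pointer and a hypothetical boot
option stand in for the kernel's actual patching mechanism:

/*
 * Pick the slow path once, early, based on topology or a hypothetical
 * boot option.  The real kernel would patch the call site (as PV
 * qspinlock does) rather than use a function pointer; this only shows
 * the structure.
 */
#include <stdio.h>
#include <string.h>

static void native_queued_spin_lock_slowpath(void)
{
	printf("native slow path\n");
}

static void cna_queued_spin_lock_slowpath(void)
{
	printf("CNA slow path\n");
}

/* Default to the simple native code; switched at "boot" if it pays off. */
static void (*queued_spin_lock_slowpath)(void) =
	native_queued_spin_lock_slowpath;

static void select_slowpath(int nr_numa_nodes, const char *bootopt)
{
	/* Only bother with CNA on multi-socket systems, or when forced. */
	if (nr_numa_nodes > 1 || (bootopt && !strcmp(bootopt, "on")))
		queued_spin_lock_slowpath = cna_queued_spin_lock_slowpath;
}

int main(void)
{
	select_slowpath(2, NULL);	/* pretend: 2 NUMA nodes, no option */
	queued_spin_lock_slowpath();	/* single-node systems keep native */
	return 0;
}

That way single-node systems never pay for the extra CNA logic.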
I would like to keep the core slowpath function simple and easy to
understand, so most of the CNA code should be encapsulated in helper
functions and put into a separate file.
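Something along these lines, where all the helper names are made up and
only show the shape I have in mind:

/*
 * The core slow path only calls small, well-named hooks; everything
 * CNA-specific lives behind them (in the real tree that could be a
 * separate file included conditionally).
 */
#include <stdbool.h>
#include <stdio.h>

struct mcs_node {
	int numa_node;
	struct mcs_node *next;
};

/* --- CNA-specific helpers, would live in their own file ------------- */

static void cna_init_node(struct mcs_node *node, int numa_node)
{
	node->numa_node = numa_node;
	node->next = NULL;
}

static bool cna_prefer_successor(struct mcs_node *me, struct mcs_node *next)
{
	/* Pass the lock within the socket when possible. */
	return next && next->numa_node == me->numa_node;
}

/* --- core slow path stays short and generic -------------------------- */

static void slowpath(struct mcs_node *me, struct mcs_node *next, int node)
{
	cna_init_node(me, node);
	/* ... queueing and spinning elided ... */
	if (cna_prefer_successor(me, next))
		printf("hand off to same-socket waiter\n");
	else
		printf("hand off to head of queue\n");
}

int main(void)
{
	struct mcs_node me, other = { .numa_node = 0, .next = NULL };

	slowpath(&me, &other, 0);
	return 0;
}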
Thanks,
Longman