Message-ID: <20200121144229.GN14914@hirez.programming.kicks-ass.net>
Date: Tue, 21 Jan 2020 15:42:29 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Alex Kogan <alex.kogan@...cle.com>
Cc: linux@...linux.org.uk, mingo@...hat.com, will.deacon@....com,
arnd@...db.de, longman@...hat.com, linux-arch@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
tglx@...utronix.de, bp@...en8.de, hpa@...or.com, x86@...nel.org,
guohanjun@...wei.com, jglauber@...vell.com,
steven.sistare@...cle.com, daniel.m.jordan@...cle.com,
dave.dice@...cle.com
Subject: Re: [PATCH v8 3/5] locking/qspinlock: Introduce CNA into the slow
path of qspinlock
On Mon, Dec 30, 2019 at 02:40:40PM -0500, Alex Kogan wrote:
> +#define pv_wait_head_or_lock cna_pre_scan
Also inconsistent naming.
> +__always_inline u32 cna_pre_scan(struct qspinlock *lock,
> + struct mcs_spinlock *node)
> +{
> + struct cna_node *cn = (struct cna_node *)node;
> +
> + cn->pre_scan_result = cna_scan_main_queue(node, node);
> +
> + return 0;
> +}
The thinking here is that we're trying to make use of the time otherwise
spent spinning on atomic_cond_read_acquire(), to search for a potential
unlock candidate?
Surely that deserves a comment.
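
FWIW, the context I have in mind is roughly the wait-at-head step in
queued_spin_lock_slowpath(); this is a paraphrase for illustration, not
the literal source, and the comment wording is only a suggestion:

	/*
	 * With CNA, pv_wait_head_or_lock() is aliased to cna_pre_scan(),
	 * so the queue scan is done here, right before we sit and spin
	 * waiting for the lock+pending bits to clear -- i.e. in time we
	 * would otherwise burn doing nothing useful. The result is
	 * stashed in cn->pre_scan_result and presumably consumed later
	 * when picking the successor at unlock time.
	 */
	if ((val = pv_wait_head_or_lock(lock, node)))
		goto locked;

	val = atomic_cond_read_acquire(&lock->val,
				       !(VAL & _Q_LOCKED_PENDING_MASK));

Something along those lines in the changelog and/or above cna_pre_scan()
would make the intent obvious.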