Message-ID: <ddd5d732-06da-f8f2-ba4a-686c58297e47@plexistor.com>
Date: Thu, 17 Sep 2020 13:51:43 +0300
From: Boaz Harrosh <boaz@...xistor.com>
To: Hou Tao <houtao1@...wei.com>, peterz@...radead.org,
Oleg Nesterov <oleg@...hat.com>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>
Cc: Dennis Zhou <dennis@...nel.org>, Tejun Heo <tj@...nel.org>,
Christoph Lameter <cl@...ux.com>, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, Jan Kara <jack@...e.cz>
Subject: Re: [RFC PATCH] locking/percpu-rwsem: use this_cpu_{inc|dec}() for
read_count
On 16/09/2020 15:32, Hou Tao wrote:
<>
> However, the performance degradation is huge under aarch64 (4 sockets, 24 cores per socket): nearly 60% lost.
>
> v4.19.111
> no writer, reader cn                               |        24 |        48 |        72 |        96
> rate of down_read/up_read per second               | 166129572 | 166064100 | 165963448 | 165203565
> rate of down_read/up_read per second (patched)     |  63863506 |  63842132 |  63757267 |  63514920
>
I believe Peter Z's suggestion is the way to go: add a separate
percpu_down_read_irqsafe() API and let only the in-IRQ users pay the
penalty.
Peter Z wrote:
> My leading alternative was adding: percpu_down_read_irqsafe() /
> percpu_up_read_irqsafe(), which use local_irq_save() instead of
> preempt_disable().
Thanks
Boaz