Date:   Mon, 28 Mar 2022 11:11:31 -0400
From:   Waiman Long <longman@...hat.com>
To:     David Hildenbrand <david@...hat.com>,
        Hillf Danton <hdanton@...a.com>
Cc:     Peter Zijlstra <peterz@...radead.org>, MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] locking/rwsem: dont wake up wwaiter in case of lock holder

On 3/28/22 10:18, David Hildenbrand wrote:
> On 26.03.22 14:40, Hillf Danton wrote:
>> In the slowpath of down_write(), we bail out when a signal is received and
>> try to wake up any pending waiter, but it makes no sense to wake up a write
>> waiter while any lock holder, either writer or reader, is present.
> But is handling this better really worth additional code and runtime
> checks? IOW, does this happen often enough that we actually care about
> optimizing this? I have no idea :)
>
>> The RFC is to do nothing for a write waiter if any lock holder is present -
>> the holders will fulfill that duty at lock release time.
>>
>> Only for thoughts now.
>>
>> Hillf
>>
>> --- x/kernel/locking/rwsem.c
>> +++ y/kernel/locking/rwsem.c
>> @@ -418,6 +418,8 @@ static void rwsem_mark_wake(struct rw_se
>>   	waiter = rwsem_first_waiter(sem);
>>   
>>   	if (waiter->type == RWSEM_WAITING_FOR_WRITE) {
>> +		if (RWSEM_LOCK_MASK & atomic_long_read(&sem->count))
>> +			return;
>>   		if (wake_type == RWSEM_WAKE_ANY) {
>>   			/*
>>   			 * Mark writer at the front of the queue for wakeup.
>> --

That check isn't good enough. First of all, any reader count in 
sem->count can be transient, because down_read() does an unconditional 
atomic_long_add() in its fast path; the reader may then remove its 
reader count again in the slow path. This patch may therefore cause a 
missed wakeup, which is a much bigger problem than spending a bit of 
CPU time to check for lock availability and sleep again.

The write-lock bit, however, is real. We do allow the first writer in 
the wait queue to spin on the lock when the handoff bit is set, so 
waking up a writer while the rwsem is currently write-locked can still 
be useful.

BTW, I didn't see this RFC patch on LKML. Was it originally posted only 
to linux-mm?

Cheers,
Longman
