Date:	Mon, 05 Oct 2009 11:26:43 +0800
From:	Amerigo Wang <amwang@...hat.com>
To:	David Howells <dhowells@...hat.com>
CC:	linux-kernel@...r.kernel.org,
	Brian Behlendorf <behlendorf1@...l.gov>,
	Ben Woodard <bwoodard@...l.gov>,
	Stable Team <stable@...nel.org>, akpm@...ux-foundation.org
Subject: Re: [Patch] rwsem: fix rwsem_is_locked() bug

David Howells wrote:
> Amerigo Wang <amwang@...hat.com> wrote:
> 
>> rwsem_is_locked() tests ->activity without taking any locks, so we
>> should always keep ->activity consistent. However, the code in
>> __rwsem_do_wake() breaks this rule: it updates ->activity only after
>> _all_ readers have been woken up, so a freshly woken reader may see a
>> stale ->activity value, causing rwsem_is_locked() to behave wrongly.
> 
> NAK.
> 
> This does not fix the case where the active readers run out, but there's a
> writer on the queue (see __up_read()), nor the case where the active writer
> ends, but there's a waiter on the queue (see __up_write()).  In both cases,
> the lock is still held, though sem->activity is 0.


Hmm, so the algorithm used in rwsem_is_locked() is not right. :-/
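
To make sure I understand the window: in the spinlock-based implementation,
the last reader can drop ->activity to 0 while a writer is still queued.
Roughly (a sketch of the lib/rwsem-spinlock.c __up_read() path from memory,
not a verbatim copy):

	void __up_read(struct rw_semaphore *sem)
	{
		unsigned long flags;

		spin_lock_irqsave(&sem->wait_lock, flags);

		/* The last reader drops ->activity to 0 here; a writer may
		 * still be queued on ->wait_list, so the rwsem is logically
		 * held, yet an unlocked rwsem_is_locked() sampling
		 * ->activity in this window sees 0 and reports "unlocked".
		 */
		if (--sem->activity == 0 && !list_empty(&sem->wait_list))
			sem = __rwsem_wake_one_writer(sem);

		spin_unlock_irqrestore(&sem->wait_lock, flags);
	}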

> 
> I'm leery of endorsing the presence of rwsem_is_locked() since, unless the
> function calling it knows that the process it is running in has the rwsem
> locked, the value is obsolete the moment the test has been performed.
> 
> The other problem with this change is that it has the potential to cause more
> cacheline ping-pong under contention.  That said, contention on an rwsem is
> much less likely, I think, than on, say, a spinlock, so this change shouldn't
> cause a significant slowdown.
> 
> Your patch would probably be better as:
> 
> -	woken = 0;
> +	woken = ++sem->activity;
> 	while (waiter->flags & RWSEM_WAITING_FOR_READ) {
> 		struct list_head *next = waiter->list.next;
> 
> 		list_del(&waiter->list);
> 		tsk = waiter->task;
> 		smp_mb();
> 		waiter->task = NULL;
> 		wake_up_process(tsk);
> 		put_task_struct(tsk);
> 		woken++;
> 		if (list_empty(&sem->wait_list))
> 			break;
> 		waiter = list_entry(next, struct rwsem_waiter, list);
> 	}
> 
> -	sem->activity += woken;
> +	sem->activity = woken;
> 
> However, as I said above, that is not sufficient.  You really do need to put
> spinlocks in rwsem_is_locked():
> 
> 	static inline int rwsem_is_locked(struct rw_semaphore *sem)
> 	{
> 		unsigned long flags;
> 		__s32 activity;
> 
> 		spin_lock_irqsave(&sem->wait_lock, flags);
> 		activity = sem->activity;
> 		spin_unlock_irqrestore(&sem->wait_lock, flags);
> 		return activity != 0;
> 	}


Sure, adding spinlocks can solve this, but that would be expensive,
wouldn't it?
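
Maybe a trylock would be a cheaper compromise, something like this (just a
sketch, untested):

	static inline int rwsem_is_locked(struct rw_semaphore *sem)
	{
		int ret = 1;
		unsigned long flags;

		/* If ->wait_lock cannot be taken, someone is in the middle
		 * of taking or releasing the rwsem, so presume it is locked
		 * instead of spinning for an answer that may be stale anyway.
		 */
		if (spin_trylock_irqsave(&sem->wait_lock, flags)) {
			ret = (sem->activity != 0);
			spin_unlock_irqrestore(&sem->wait_lock, flags);
		}
		return ret;
	}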

> 
> You also need to check over lib/rwsem.c.  rwsem_is_locked() is unreliable for
> that algorithm.

Yeah, I agree, I will try another fix.
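
For reference, the count-based lib/rwsem.c side typically tests the atomic
count without any lock, roughly like this (a sketch; the exact definition
varies by architecture):

	static inline int rwsem_is_locked(struct rw_semaphore *sem)
	{
		/* ->count mixes active lockers with the waiter bias and is
		 * read without ->wait_lock, so a non-zero value can also
		 * mean "only waiters queued", and the result may be stale
		 * by the time the caller acts on it.
		 */
		return (sem->count != 0);
	}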

Thank you!
