Date:   Wed, 11 May 2022 10:30:36 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Waiman Long <longman@...hat.com>
Cc:     Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
        Boqun Feng <boqun.feng@...il.com>,
        Arnd Bergmann <arnd@...db.de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] locking/qrwlock: Reduce cacheline contention for
 rwlocks used in interrupt context

On Tue, May 10, 2022 at 03:21:34PM -0400, Waiman Long wrote:
> Even though qrwlock is supposed to be a fair lock, it does allow readers
> from interrupt context to spin on the lock until they can acquire it,
> making it less fair. This exception was added due to the requirement to
> allow recursive read locking in interrupt context. That requirement can
> also be met by simply ignoring the writer waiting bit, without spinning
> on the lock.
> 
> This change makes qrwlock a bit fairer and eliminates the cacheline
> bouncing problem for rwlocks that are used heavily in interrupt context,
> such as in the networking stack. It should also reduce the chance of lock
> starvation for those interrupt-context rwlocks.
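
For reference, the writer bits being tested here live in lock->cnts; a
sketch of the layout (per include/asm-generic/qrwlock.h, values quoted from
memory, so double-check the header):

	/*
	 * qrwlock cnts layout (sketch):
	 *   _QW_LOCKED   0x0ff   - a writer holds the lock
	 *   _QW_WAITING  0x100   - a writer is queued and waiting
	 *   _QW_WMASK    0x1ff   - writer locked or waiting
	 *   _QR_BIAS     0x200   - one reader; reader count starts at bit 9
	 */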

> diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
> index 2e1600906c9f..d52d13e95600 100644
> --- a/kernel/locking/qrwlock.c
> +++ b/kernel/locking/qrwlock.c
> @@ -18,21 +18,16 @@
>   * queued_read_lock_slowpath - acquire read lock of a queued rwlock
>   * @lock: Pointer to queued rwlock structure
>   */
> -void queued_read_lock_slowpath(struct qrwlock *lock)
> +void queued_read_lock_slowpath(struct qrwlock *lock, int cnts)
>  {
>  	/*
> -	 * Readers come here when they cannot get the lock without waiting
> +	 * Readers come here when they cannot get the lock without waiting.
> +	 * Readers in interrupt context can steal the lock immediately
> +	 * if the writer is just waiting (not holding the lock yet).
>  	 */
> -	if (unlikely(in_interrupt())) {
> -		/*
> -		 * Readers in interrupt context will get the lock immediately
> -		 * if the writer is just waiting (not holding the lock yet),
> -		 * so spin with ACQUIRE semantics until the lock is available
> -		 * without waiting in the queue.
> -		 */
> -		atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
> +	if (unlikely(!(cnts & _QW_LOCKED) && in_interrupt()))
>  		return;
> -	}
> +
>  	atomic_sub(_QR_BIAS, &lock->cnts);
>  
>  	trace_contention_begin(lock, LCB_F_SPIN | LCB_F_READ);

I'm confused; prior to this change:

	CPU0			CPU1

	write_lock_irq(&l)
				read_lock(&l)
				<IRQ>
				  read_lock(&l)
				  ...

was not a deadlock, but now it would be, AFAICT.
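
Spelling out my reading of that trace (a sketch; assuming the queued reader
still spins with lock->wait_lock held, as in the current slowpath):

	CPU0			CPU1

	write_lock_irq(&l)	/* cnts has _QW_LOCKED set */
				read_lock(&l)
				  /* queues: takes l.wait_lock, then spins
				   * for !(cnts & _QW_LOCKED) */
				<IRQ>
				  read_lock(&l)
				    /* _QW_LOCKED is set, so no steal;
				     * atomic_sub(_QR_BIAS), then
				     * arch_spin_lock(&l.wait_lock), which the
				     * interrupted task already holds
				     * -> spins forever */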
