Date: Wed, 27 Dec 2023 19:07:27 +0800
From: Hillf Danton <hdanton@...a.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: "Eric W. Biederman" <ebiederm@...ssion.com>,
	Maria Yu <quic_aiquny@...cinc.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] kernel: Introduce a write lock/unlock wrapper for tasklist_lock

On Tue, 26 Dec 2023 20:49:38 +0000 Matthew Wilcox <willy@...radead.org>
> On Tue, Dec 26, 2023 at 06:46:52PM +0800, Hillf Danton wrote:
> > On Wed, 13 Dec 2023 12:27:05 -0600 Eric W. Biederman <ebiederm@...ssion.com>
> > > Matthew Wilcox <willy@...radead.org> writes:
> > > > I think the right way to fix this is to pass a boolean flag to
> > > > queued_write_lock_slowpath() to let it know whether it can re-enable
> > > > interrupts while checking whether _QW_WAITING is set.
> > 
> > 	lock(&lock->wait_lock)
> > 	enable irq
> > 	<irq arrives>
> > 	lock(&lock->wait_lock)	/* recursive lock on wait_lock */
> > 
> > You are adding a chance of recursive locking.
> 
> Did you bother to read queued_read_lock_slowpath() before writing this email?

Nope, but that doesn't matter in this case.
> 
>         if (unlikely(in_interrupt())) {
>                 /*
>                  * Readers in interrupt context will get the lock immediately
>                  * if the writer is just waiting (not holding the lock yet),
>                  * so spin with ACQUIRE semantics until the lock is available
>                  * without waiting in the queue.
>                  */
>                 atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
>                 return;

This is the read acquirer for irq context, and it rolls out the red
carpet for a write acquirer in irq context, right Willy?

Feel free to ignore the following legwork.

	/* Set the waiting flag to notify readers that a writer is pending */
	atomic_or(_QW_WAITING, &lock->cnts);

	enable irq;

	/* When no more readers or writers, set the locked flag */
	do {
		cnts = atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);
	} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));

	<irq arrives after the cmpxchg took the lock>
	atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
	deadlock: the irq reader spins forever, because the interrupted
	          writer holds _QW_LOCKED and cannot run to unlock
	disable irq;

Though the variant below is safe, it looks clumsy rather than pretty.

	/* Set the waiting flag to notify readers that a writer is pending */
	atomic_or(_QW_WAITING, &lock->cnts);

	/* When no more readers or writers, set the locked flag */
	do {
		enable irq;

		cnts = atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);

		disable irq;

	} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));

