Message-ID: <20240416155014.GB12673@noisy.programming.kicks-ass.net>
Date: Tue, 16 Apr 2024 17:50:14 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Elizabeth Figura <zfigura@...eweavers.com>
Cc: Arnd Bergmann <arnd@...db.de>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Jonathan Corbet <corbet@....net>, Shuah Khan <shuah@...nel.org>,
	linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
	wine-devel@...ehq.org,
	André Almeida <andrealmeid@...lia.com>,
	Wolfram Sang <wsa@...nel.org>,
	Arkadiusz Hiler <ahiler@...eweavers.com>,
	Andy Lutomirski <luto@...nel.org>, linux-doc@...r.kernel.org,
	linux-kselftest@...r.kernel.org,
	Randy Dunlap <rdunlap@...radead.org>,
	Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
	Waiman Long <longman@...hat.com>, Boqun Feng <boqun.feng@...il.com>
Subject: Re: [PATCH v4 00/30] NT synchronization primitive driver

On Tue, Apr 16, 2024 at 10:14:21AM +0200, Peter Zijlstra wrote:

> > Some aspects of the implementation may deserve particular comment:
> > 
> > * In the interest of performance, each object is governed only by a single
> >   spinlock. However, NTSYNC_IOC_WAIT_ALL requires that the state of multiple
> >   objects be changed as a single atomic operation. In order to achieve this, we
> >   first take a device-wide lock ("wait_all_lock") any time we are going to lock
> >   more than one object at a time.
> > 
> >   The maximum number of objects that can be used in a vectored wait, and
> >   therefore the maximum that can be locked simultaneously, is 64. This number is
> >   NT's own limit.

AFAICT:

	spin_lock(&dev->wait_all_lock);
	  list_for_each_entry(entry, &obj->all_waiters, node)
	    for (i = 0; i < count; i++)
	      spin_lock_nest_lock(&q->entries[i].obj->lock, &dev->wait_all_lock);

Where @count <= NTSYNC_MAX_WAIT_COUNT.

So while this nests at most 65 spinlocks at any one time, there is no
actual bound on the total number of nested lock sections executed while
holding wait_all_lock. That is, the all_waiters list can grow without
limit.

Can we pretty please make wait_all_lock a mutex?
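
Something like the below is what I mean. This is only a sketch, using
trimmed-down versions of the structures from this series, and
ntsync_lock_all() is a name I'm making up here, not something in the
patches:

	#include <linux/mutex.h>
	#include <linux/spinlock.h>
	#include <linux/types.h>

	/* Trimmed-down versions of the structures in this series. */
	struct ntsync_obj {
		spinlock_t		lock;
		/* ... */
	};

	struct ntsync_q_entry {
		struct ntsync_obj	*obj;
		/* ... */
	};

	struct ntsync_q {
		__u32			count;
		struct ntsync_q_entry	entries[];
	};

	struct ntsync_device {
		struct mutex		wait_all_lock;	/* was: spinlock_t */
		/* ... */
	};

	/* Hypothetical helper: lock all objects of a wait-for-all in one go. */
	static void ntsync_lock_all(struct ntsync_device *dev,
				    struct ntsync_q *q, unsigned int count)
	{
		unsigned int i;

		/*
		 * Contenders now sleep instead of spinning and the holder
		 * stays preemptible, which matters because the time spent
		 * under this lock is not bounded.
		 */
		mutex_lock(&dev->wait_all_lock);

		/* At most NTSYNC_MAX_WAIT_COUNT (64) spinlocks nest under it. */
		for (i = 0; i < count; i++)
			spin_lock_nest_lock(&q->entries[i].obj->lock,
					    &dev->wait_all_lock);
	}

That way the per-object critical sections stay small and bounded, while
contention on the (unbounded) multi-object section is handled by
sleeping instead of spinning.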

> >   The acquisition of multiple spinlocks will degrade performance. This is a
> >   conscious choice, however. Wait-for-all is known to be a very rare operation
> >   in practice, especially with counts that approach the maximum, and it is the
> >   intent of the ntsync driver to optimize wait-for-any at the expense of
> >   wait-for-all as much as possible.

Typical sane usage is a good guide for performance, but you must not
forget about malicious userspace and what it can deliberately do to mess
you up.


Anyway, let me stare more at all this....
