Message-ID: <e2957e5bc071480889f4e1aa32b9cdea@AcuMS.aculab.com>
Date:   Thu, 28 Sep 2023 14:33:19 +0000
From:   David Laight <David.Laight@...LAB.COM>
To:     'Mathieu Desnoyers' <mathieu.desnoyers@...icios.com>,
        'Peter Zijlstra' <peterz@...radead.org>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Thomas Gleixner" <tglx@...utronix.de>,
        "Paul E . McKenney" <paulmck@...nel.org>,
        Boqun Feng <boqun.feng@...il.com>,
        "H . Peter Anvin" <hpa@...or.com>, "Paul Turner" <pjt@...gle.com>,
        "linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
        Christian Brauner <brauner@...nel.org>,
        "Florian Weimer" <fw@...eb.enyo.de>,
        "carlos@...hat.com" <carlos@...hat.com>,
        "Peter Oskolkov" <posk@...k.io>,
        Alexander Mikhalitsyn <alexander@...alicyn.com>,
        Chris Kennelly <ckennelly@...gle.com>,
        Ingo Molnar <mingo@...hat.com>,
        "Darren Hart" <dvhart@...radead.org>,
        Davidlohr Bueso <dave@...olabs.net>,
        André Almeida <andrealmeid@...lia.com>,
        "libc-alpha@...rceware.org" <libc-alpha@...rceware.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Jonathan Corbet <corbet@....net>,
        Noah Goldstein <goldstein.w.n@...il.com>,
        Daniel Colascione <dancol@...gle.com>,
        "longman@...hat.com" <longman@...hat.com>,
        Florian Weimer <fweimer@...hat.com>
Subject: RE: [RFC PATCH v2 1/4] rseq: Add sched_state field to struct rseq

From: Mathieu Desnoyers
> Sent: 28 September 2023 14:21
> 
> On 9/28/23 07:22, David Laight wrote:
> > From: Peter Zijlstra
> >> Sent: 28 September 2023 11:39
> >>
> >> On Mon, May 29, 2023 at 03:14:13PM -0400, Mathieu Desnoyers wrote:
> >>> Expose the "on-cpu" state for each thread through struct rseq to allow
> >>> adaptive mutexes to decide more accurately between busy-waiting and
> >>> calling sys_futex() to release the CPU, based on the on-cpu state of the
> >>> mutex owner.
> >
> > Are you trying to avoid spinning when the owning process is sleeping?
> 
> Yes, this is my main intent.
> 
> > Or trying to avoid the system call when it will find that the futex
> > is no longer held?
> >
> > The latter is really horribly detrimental.
> 
> That's a good question. What should we do in these three situations
> when trying to grab the lock:
> 
> 1) Lock has no owner
> 
> We probably want to simply grab the lock with an atomic instruction. But
> then if other threads are queued on sys_futex and have not yet managed
> to grab the lock, this would be detrimental to fairness.
> 
> 2) Lock owner is running:
> 
> The lock owner is certainly running on another cpu (I'm using the term
> "cpu" here as logical cpu).
> 
> I guess we could either decide to bypass sys_futex entirely and try to
> grab the lock with an atomic, or we go through sys_futex nevertheless to
> allow futex to guarantee some fairness across threads.

I'd not worry about 'fairness'.
If the mutex is that contended, you've already lost!

I had a big problem trying to avoid the existing 'fairness' code.
Consider 30 RT threads blocked in cv_wait() on the same condvar.
Something does cv_broadcast() and you want them all to wake up.
They'll all release the mutex pretty quickly - it doesn't matter if they spin.
But what actually happens is that only one thread is woken up.
Once it has been scheduled (after the cpu has come out of a sleep state
and/or any hardware interrupts have completed, etc.) the next thread is woken.
If you are lucky it'll 'only' take a few ms to get them all running.
Not good when you are trying to process audio every 10ms.
I had to use a separate cv for each thread and get the woken threads
to help with the wakeups. God knows what happens with 256 threads!
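
Roughly what I ended up with, as an untested, condensed sketch (the
per-thread condvar array and the binary-tree fanout here are
illustrative, not the actual code):

/*
 * Sketch of the workaround: one condvar per thread, with each woken
 * thread waking two more, so the wakeup depth is O(log n) instead of
 * one long serial chain.  Names are illustrative.
 */
#include <pthread.h>

#define NTHREADS 30

struct waiter {
	pthread_mutex_t lock;
	pthread_cond_t cv;
	int wake;			/* per-thread wakeup predicate */
};

static struct waiter waiters[NTHREADS];

static void waiters_init(void)
{
	for (int i = 0; i < NTHREADS; i++) {
		pthread_mutex_init(&waiters[i].lock, NULL);
		pthread_cond_init(&waiters[i].cv, NULL);
		waiters[i].wake = 0;
	}
}

/* Called by the broadcaster for id 0, and by each woken thread for
 * its children in the tree. */
static void wake_one(int id)
{
	if (id >= NTHREADS)
		return;
	pthread_mutex_lock(&waiters[id].lock);
	waiters[id].wake = 1;
	pthread_cond_signal(&waiters[id].cv);
	pthread_mutex_unlock(&waiters[id].lock);
}

/* Each thread waits on its own cv, then helps wake two more. */
static void wait_and_help(int id)
{
	pthread_mutex_lock(&waiters[id].lock);
	while (!waiters[id].wake)
		pthread_cond_wait(&waiters[id].cv, &waiters[id].lock);
	waiters[id].wake = 0;
	pthread_mutex_unlock(&waiters[id].lock);

	wake_one(2 * id + 1);
	wake_one(2 * id + 2);
}

With 30 threads that turns a chain of ~30 serial wakeups into a tree
about 5 deep.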

> 3) Lock owner is sleeping:
> 
> The lock owner may be either tied to the same cpu as the requester, or a
> different cpu. Here calling FUTEX_WAIT and friends is pretty much required.

You'd need the 'holding process is sleeping' test to be significantly
faster than the 'optimistic spin hoping the mutex will be released'.
And for the 'spin' to be longer than the syscall time for futex.
Otherwise you are optimising an already slow path.
If the thread is going to have to sleep until the thread that owns
the mutex wakes up, then I can't imagine performance mattering.

OTOH it is much more usual for the owning thread to be running and
release the mutex quickly.

I wouldn't have thought it was really worth optimising for the
'lock owner is sleeping' case.
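
FWIW, the usage the patch seems to be aiming at looks roughly like the
untested sketch below; the ON_CPU flag name, how the owner publishes
its state word, and the lock layout are all assumptions made for
illustration here, not the RFC's actual ABI:

/*
 * Sketch of an adaptive lock using the proposed rseq "on-cpu" state.
 * All names (RSEQ_SCHED_STATE_ON_CPU, owner_state) are assumptions.
 */
#include <stdint.h>
#include <stdatomic.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

#define RSEQ_SCHED_STATE_ON_CPU	(1U << 0)	/* assumed flag */

struct adaptive_lock {
	_Atomic uint32_t word;			/* 0 = free, else owner tid */
	_Atomic uint32_t *_Atomic owner_state;	/* owner's rseq state word */
};

static void futex_wait(_Atomic uint32_t *uaddr, uint32_t val)
{
	syscall(SYS_futex, uaddr, FUTEX_WAIT, val, NULL, NULL, 0);
}

static void adaptive_lock(struct adaptive_lock *l, uint32_t self_tid,
			  _Atomic uint32_t *self_state)
{
	for (;;) {
		uint32_t unlocked = 0;

		/* 1) No owner: grab it with an atomic (ignores fairness). */
		if (atomic_compare_exchange_weak(&l->word, &unlocked,
						 self_tid)) {
			atomic_store(&l->owner_state, self_state);
			return;
		}

		/* 2) Owner on-cpu: spin, hoping for a quick release. */
		_Atomic uint32_t *st = atomic_load(&l->owner_state);
		if (st && (atomic_load_explicit(st, memory_order_relaxed) &
			   RSEQ_SCHED_STATE_ON_CPU))
			continue;

		/* 3) Owner sleeping: stop burning cpu, wait in the kernel.
		 * (Sketch ignores FUTEX_WAKE on unlock and the lifetime
		 * of the owner_state pointer.) */
		futex_wait(&l->word, atomic_load(&l->word));
	}
}

Even with the on-cpu check, the spin in case 2 still has to be bounded
against the futex syscall cost, as above.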

	David

