Message-ID: <ef39143ad24743008a896d2a09da1066@AcuMS.aculab.com>
Date: Thu, 28 Sep 2023 11:22:55 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Peter Zijlstra' <peterz@...radead.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Thomas Gleixner" <tglx@...utronix.de>,
"Paul E . McKenney" <paulmck@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
"H . Peter Anvin" <hpa@...or.com>, "Paul Turner" <pjt@...gle.com>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
Christian Brauner <brauner@...nel.org>,
"Florian Weimer" <fw@...eb.enyo.de>,
"carlos@...hat.com" <carlos@...hat.com>,
"Peter Oskolkov" <posk@...k.io>,
Alexander Mikhalitsyn <alexander@...alicyn.com>,
Chris Kennelly <ckennelly@...gle.com>,
Ingo Molnar <mingo@...hat.com>,
"Darren Hart" <dvhart@...radead.org>,
Davidlohr Bueso <dave@...olabs.net>,
André Almeida <andrealmeid@...lia.com>,
"libc-alpha@...rceware.org" <libc-alpha@...rceware.org>,
Steven Rostedt <rostedt@...dmis.org>,
Jonathan Corbet <corbet@....net>,
Noah Goldstein <goldstein.w.n@...il.com>,
Daniel Colascione <dancol@...gle.com>,
"longman@...hat.com" <longman@...hat.com>,
Florian Weimer <fweimer@...hat.com>
Subject: RE: [RFC PATCH v2 1/4] rseq: Add sched_state field to struct rseq
From: Peter Zijlstra
> Sent: 28 September 2023 11:39
>
> On Mon, May 29, 2023 at 03:14:13PM -0400, Mathieu Desnoyers wrote:
> > Expose the "on-cpu" state for each thread through struct rseq to allow
> > adaptive mutexes to decide more accurately between busy-waiting and
> > calling sys_futex() to release the CPU, based on the on-cpu state of the
> > mutex owner.
Are you trying to avoid spinning when the owning process is sleeping?
Or trying to avoid the system call when it will find that the futex
is no longer held?
The latter (a syscall that just finds the futex already free) is really horribly detrimental.
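To make the question concrete, the lock path I assume is intended is roughly the
below. The on-cpu field/flag names here are made up for illustration, they are
not the patch's actual ABI; it just shows the two decision points I mean:

#include <stdatomic.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

#define HINT_ON_CPU	1u	/* invented: bit the scheduler would clear on preemption */

struct mutex {
	_Atomic uint32_t futex;			/* 0 = free, otherwise owner tid */
	_Atomic uint32_t * _Atomic owner_hint;	/* owner's per-thread "on cpu" word (hint only) */
};

static void mutex_lock(struct mutex *m, uint32_t self_tid,
		       _Atomic uint32_t *self_hint)
{
	uint32_t old = 0;

	while (!atomic_compare_exchange_weak(&m->futex, &old, self_tid)) {
		_Atomic uint32_t *hint = atomic_load(&m->owner_hint);

		if (hint && (atomic_load(hint) & HINT_ON_CPU)) {
			/* Owner believed to be running: keep spinning. */
#if defined(__x86_64__) || defined(__i386__)
			__builtin_ia32_pause();
#endif
		} else if (old != 0) {
			/* Owner preempted or asleep: give the cpu back. */
			syscall(SYS_futex, &m->futex, FUTEX_WAIT,
				old, NULL, NULL, 0);
		}
		old = 0;
	}
	/* Stale/racy by design: it is only an optimisation hint. */
	atomic_store(&m->owner_hint, self_hint);
}

The unlock path would be unchanged (clear ->futex, FUTEX_WAKE a waiter); the
only new thing is the "is the owner on a cpu right now" test while spinning.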
> >
> > It is only provided as an optimization hint, because there is no
> > guarantee that the page containing this field is in the page cache, and
> > therefore the scheduler may very well fail to clear the on-cpu state on
> > preemption. This is expected to be rare though, and is resolved as soon
> > as the task returns to user-space.
> >
> > The goal is to improve use-cases where the duration of the critical
> > sections for a given lock follows a multi-modal distribution, preventing
> > statistical guesses from doing a good job at choosing between busy-wait
> > and futex wait behavior.
>
> As always, are syscalls really *that* expensive? Why can't we busy wait
> in the kernel instead?
>
> I mean, sure, meltdown sucked, but most people should now be running
> chips that are not affected by that particular horror show, no?
IIRC 'page table isolation' (which is what makes system calls expensive)
is only a compile-time option, so it is likely to be enabled on any 'distro'
kernel.
But a lot of other mitigations (e.g. RSB stuffing) are also pretty detrimental
to syscall cost.
OTOH if you have a 'hot' userspace mutex you are going to lose whatever you do.
All it takes is for an ethernet interrupt to decide to discard completed
transmits and refill the rx ring, and then for the softint code to free a load
of stuff deferred by RCU, all while you've grabbed the mutex: no matter how
short the user-space code path is, the mutex won't be released for absolutely
ages.
I had to change a load of code to use arrays and atomic increments
to avoid those delays acquiring mutexes.
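FWIW the pattern I ended up with is roughly the below (names made up): each
writer claims a slot with an atomic increment and fills it in, so nothing is
ever stalled behind a sleeping lock holder:

#include <stdatomic.h>
#include <stdint.h>

#define NR_SLOTS	1024		/* power of two; overflow handling omitted */

struct event {
	_Atomic uint32_t ready;		/* 0 until the writer has filled the slot */
	uint64_t	 data;
};

static struct event	slots[NR_SLOTS];
static _Atomic uint32_t	next_slot;

/* Writer side: no mutex at all, just claim a slot and fill it in. */
static void record_event(uint64_t data)
{
	uint32_t idx = atomic_fetch_add(&next_slot, 1) & (NR_SLOTS - 1);
	struct event *ev = &slots[idx];

	ev->data = data;
	/* Publish: the reader must not see 'ready' set before 'data'. */
	atomic_store_explicit(&ev->ready, 1, memory_order_release);
}

If a writer does get preempted mid-update only its own slot is late; the
reader just skips slots whose 'ready' flag isn't set yet, instead of everyone
queueing on a mutex held by a sleeping thread.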
David