Message-ID: <40b76cbd00d640e49f727abbd0c39693@AcuMS.aculab.com>
Date: Thu, 28 Sep 2023 15:51:47 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Steven Rostedt' <rostedt@...dmis.org>,
Peter Zijlstra <peterz@...radead.org>
CC: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Thomas Gleixner" <tglx@...utronix.de>,
"Paul E . McKenney" <paulmck@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
"H . Peter Anvin" <hpa@...or.com>, "Paul Turner" <pjt@...gle.com>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
Christian Brauner <brauner@...nel.org>,
"Florian Weimer" <fw@...eb.enyo.de>,
"carlos@...hat.com" <carlos@...hat.com>,
"Peter Oskolkov" <posk@...k.io>,
Alexander Mikhalitsyn <alexander@...alicyn.com>,
Chris Kennelly <ckennelly@...gle.com>,
Ingo Molnar <mingo@...hat.com>,
"Darren Hart" <dvhart@...radead.org>,
Davidlohr Bueso <dave@...olabs.net>,
André Almeida <andrealmeid@...lia.com>,
"libc-alpha@...rceware.org" <libc-alpha@...rceware.org>,
Jonathan Corbet <corbet@....net>,
Noah Goldstein <goldstein.w.n@...il.com>,
Daniel Colascione <dancol@...gle.com>,
"longman@...hat.com" <longman@...hat.com>,
"Florian Weimer" <fweimer@...hat.com>
Subject: RE: [RFC PATCH v2 1/4] rseq: Add sched_state field to struct rseq
From: Steven Rostedt
> Sent: 28 September 2023 15:43
>
> On Thu, 28 Sep 2023 12:39:26 +0200
> Peter Zijlstra <peterz@...radead.org> wrote:
>
> > As always, are syscalls really *that* expensive? Why can't we busy wait
> > in the kernel instead?
>
> Yes syscalls are that expensive. Several years ago I had a good talk
> with Robert Haas (one of the PostgreSQL maintainers) at Linux Plumbers,
> and I asked him if they used futexes. His answer was "no". He told me
> how they did several benchmarks and it was a huge performance hit (and
> this was before Spectre/Meltdown made things much worse). He explained
> to me that most locks are taken just to flip a few bits. Going into the
> kernel and coming back was orders of magnitude longer than the critical
> sections. By going into the kernel, it caused a ripple effect and led
> to even more contention. Their answer was to implement their locking
> completely in user space without any help from the kernel.
That matches what I found with code that was using a mutex to take
work items off a global list.
Although the mutex was only held for a few instructions (probably
several hundred, because the list wasn't that well written), what
happened was that as soon as there was any contention (which might
start with a hardware interrupt) performance went through the floor.
The fix was to replace the linked list with an array and use an
atomic add to 'grab' blocks of entries - roughly as sketched below.
(Even the atomic operations slowed things down.)
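Something like this (a sketch only, not the original code; the names
and the batch size are made up):

/* Sketch: consumers claim a block of array slots with a single
 * atomic add instead of taking a mutex around a linked list. */
#include <stdatomic.h>

#define BATCH 16

struct work_item;
static struct work_item *items;		/* filled in by the producer */
static unsigned int total;		/* number of valid entries */
static atomic_uint next_idx;		/* first unclaimed entry */

/* Returns how many entries were claimed, starting at *first. */
static unsigned int grab_block(unsigned int *first)
{
	unsigned int idx = atomic_fetch_add(&next_idx, BATCH);

	if (idx >= total)
		return 0;			/* nothing left */
	*first = idx;
	return (total - idx < BATCH) ? (total - idx) : BATCH;
}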
> This is when I thought that having an adaptive spinner that could get
> hints from the kernel via memory mapping would be extremely useful.
Did you consider writing a timestamp into the mutex when it was
acquired - or even as the 'acquired' value?
A 'moderately synched TSC' should do.
Then the waiter should be able to tell how long the mutex
has been held for - and not spin if it has already been held for ages.
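Something along these lines (x86-only sketch; the cutoff and the names
are illustrative, and it assumes the TSCs are at least moderately in
sync across CPUs):

/* Sketch: the lock word holds the TSC value from when the lock was
 * taken (0 = free).  A waiter only spins while the lock looks young. */
#include <stdatomic.h>
#include <stdint.h>
#include <sched.h>
#include <x86intrin.h>

#define SPIN_CUTOFF_CYCLES 20000ULL	/* arbitrary "short critical section" */

static _Atomic uint64_t lock_word;

static void ts_lock(void)
{
	uint64_t expected = 0;

	while (!atomic_compare_exchange_weak(&lock_word, &expected, __rdtsc())) {
		/* On failure 'expected' holds the owner's acquire timestamp. */
		if (expected && __rdtsc() - expected > SPIN_CUTOFF_CYCLES)
			sched_yield();	/* held for ages - stop spinning */
		expected = 0;
	}
}

static void ts_unlock(void)
{
	atomic_store_explicit(&lock_word, 0, memory_order_release);
}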
> The obvious problem with their implementation is that if the owner is
> sleeping, there's no point in spinning. Worse, the owner may even be
> waiting for the spinner to get off the CPU before it can run again. But
> according to Robert, the gain in the general performance greatly
> outweighed the few times this happened in practice.
Unless you can use atomics (ok for bits and linked lists) you
always have the problem that userspace can't disable interrupts.
So, unlike the kernel, you can't implement a proper spinlock.
I've NFI how CONFIG_PREEMPT_RT manages to get anything done with all
the spinlocks replaced by sleep locks.
Clearly there are spinlocks that are held for far too long.
But you really do want to spin most of the time - see the sketch below.
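The best userspace can usually do is something like this (sketch only;
the spin limit and the yield fallback are illustrative - a futex wait
could replace the yield):

/* Sketch: a test-and-set lock that spins for a bounded number of
 * iterations and then yields, since userspace can't mask interrupts
 * and the owner may have been preempted. */
#include <stdatomic.h>
#include <sched.h>

#define SPIN_LIMIT 200

static atomic_flag lk = ATOMIC_FLAG_INIT;

static void bounded_spin_lock(void)
{
	for (;;) {
		for (int i = 0; i < SPIN_LIMIT; i++) {
			if (!atomic_flag_test_and_set_explicit(&lk,
						memory_order_acquire))
				return;
		}
		sched_yield();		/* or a futex wait */
	}
}

static void bounded_spin_unlock(void)
{
	atomic_flag_clear_explicit(&lk, memory_order_release);
}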
...
David