Message-ID: <20231002125109.55c35030@gandalf.local.home>
Date: Mon, 2 Oct 2023 12:51:09 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: David Laight <David.Laight@...LAB.COM>
Cc: Peter Zijlstra <peterz@...radead.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Thomas Gleixner" <tglx@...utronix.de>,
"Paul E . McKenney" <paulmck@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
"H . Peter Anvin" <hpa@...or.com>, "Paul Turner" <pjt@...gle.com>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
Christian Brauner <brauner@...nel.org>,
"Florian Weimer" <fw@...eb.enyo.de>,
"carlos@...hat.com" <carlos@...hat.com>,
"Peter Oskolkov" <posk@...k.io>,
Alexander Mikhalitsyn <alexander@...alicyn.com>,
Chris Kennelly <ckennelly@...gle.com>,
Ingo Molnar <mingo@...hat.com>,
"Darren Hart" <dvhart@...radead.org>,
Davidlohr Bueso <dave@...olabs.net>,
André Almeida <andrealmeid@...lia.com>,
"libc-alpha@...rceware.org" <libc-alpha@...rceware.org>,
Jonathan Corbet <corbet@....net>,
Noah Goldstein <goldstein.w.n@...il.com>,
Daniel Colascione <dancol@...gle.com>,
"longman@...hat.com" <longman@...hat.com>,
"Florian Weimer" <fweimer@...hat.com>
Subject: Re: [RFC PATCH v2 1/4] rseq: Add sched_state field to struct rseq
On Thu, 28 Sep 2023 15:51:47 +0000
David Laight <David.Laight@...LAB.COM> wrote:
> > This is when I thought that having an adaptive spinner that could get
> > hints from the kernel via memory mapping would be extremely useful.
>
> Did you consider writing a timestamp into the mutex when it was
> acquired - or even as the 'acquired' value?
> A 'moderately synched TSC' should do.
> Then the waiter should be able to tell how long the mutex
> has been held for - and then not spin if it had been held ages.
And what heuristic would you use? In my experience, a "time to spin" picked
for one workload may cause major regressions in another workload. I came to
"hate" heuristics and would NACK them whenever someone suggested adding them
to the rt_mutex in the kernel (back before adaptive mutexes were
introduced).
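To make the problem concrete, the scheme you describe would look roughly
like this (a minimal sketch, illustration only: SPIN_CUTOFF is a made-up
magic number, the sleeping fallback is elided, and TSC units are assumed
per your "moderately synched TSC"):

#include <stdatomic.h>
#include <stdint.h>
#include <x86intrin.h>                  /* __rdtsc(), _mm_pause() */

#define SPIN_CUTOFF     100000ULL       /* cycles -- the heuristic in question */

static _Atomic uint64_t lock_word;      /* 0 = free, else owner's TSC stamp */

static int try_lock(void)
{
        uint64_t expected = 0;

        /* The "acquired" value is the acquisition timestamp itself. */
        return atomic_compare_exchange_strong(&lock_word, &expected,
                                              __rdtsc());
}

static void lock_or_block(void)
{
        for (;;) {
                uint64_t stamp = atomic_load(&lock_word);

                if (stamp == 0) {
                        if (try_lock())
                                return;
                        continue;
                }
                /* Held "too long"?  Stop spinning and sleep instead. */
                if (__rdtsc() - stamp > SPIN_CUTOFF)
                        break;
                _mm_pause();
        }
        /* ... fall back to a sleeping wait (e.g. FUTEX_WAIT) here ... */
}

Whatever value SPIN_CUTOFF gets, it's tuned for one workload and wrong for
another. That's the objection.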
>
> > The obvious problem with their implementation is that if the owner is
> > sleeping, there's no point in spinning. Worse, the owner may even be
> > waiting for the spinner to get off the CPU before it can run again. But
> > according to Robert, the gain in the general performance greatly
> > outweighed the few times this happened in practice.
>
> Unless you can use atomics (ok for bits and linked lists) you
> always have the problem that userspace can't disable interrupts.
> So, unlike the kernel, you can't implement a proper spinlock.
Why do you need to disable interrupts? If you know the owner is running on
a CPU, you know it's not trying to run on the CPU that is acquiring the
lock. Heck, there are normal spin locks outside of PREEMPT_RT that do not
disable interrupts. The only time you need to disable interrupts is when the
interrupt handler itself takes the spin lock, and that's just to prevent
deadlocks.
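To be clear, a bare test-and-set spin lock in user space needs nothing but
atomics -- no interrupt disabling anywhere. A minimal sketch:

#include <stdatomic.h>
#include <immintrin.h>                  /* _mm_pause() */

typedef struct {
        atomic_int locked;              /* 0 = free, 1 = held */
} uspinlock_t;

static void uspin_lock(uspinlock_t *l)
{
        while (atomic_exchange_explicit(&l->locked, 1,
                                        memory_order_acquire)) {
                /* Spin read-only until the lock looks free, to avoid
                 * bouncing the cache line between CPUs. */
                while (atomic_load_explicit(&l->locked,
                                            memory_order_relaxed))
                        _mm_pause();
        }
}

static void uspin_unlock(uspinlock_t *l)
{
        atomic_store_explicit(&l->locked, 0, memory_order_release);
}

The real hazard isn't interrupts; it's the owner being preempted while
holding the lock, which is what adaptive spinning addresses.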
>
> I've NFI how CONFIG_RT manages to get anything done with all
> the spinlocks replaced by sleep locks.
> Clearly there are spinlocks that are held for far too long.
> But you really do want to spin most of the time.
It spins as long as the owner of the lock is running on a CPU. This is
what we are looking to get from this patch series for user space.
Back in 2007, we had an issue with scaling on SMP machines. The RT kernel
with the sleeping spin locks would slow down exponentially as CPUs were
added. Once we hit more than 16 CPUs, booting an RT kernel took tens of
minutes, while the normal CONFIG_PREEMPT kernel booted in only a couple of
minutes. The more CPUs you added, the worse it became.
Then SUSE submitted a patch to have the rt_mutex spin only if the owner of
the mutex was still running on another CPU. This actually mimics a real
spin lock (because that's exactly what spin locks do: spin while the owner
is running on a CPU). The difference between a true spin lock and an
rt_mutex was that the rt_mutex spinner would stop spinning if the owner was
preempted (a true spin lock owner cannot be preempted).
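In user space, the same trick would look something like the sketch below,
built on the test-and-set lock above. The struct layout and flag name
follow my reading of this RFC and may not match the final ABI; how the
waiter locates the owner's state is elided, and block_on_lock() is a
hypothetical stand-in for a FUTEX_WAIT-style sleep:

#include <stdint.h>

/* Per my reading of the RFC; the real ABI may differ. */
struct rseq_sched_state {
        uint32_t state;                 /* ON_CPU flag set while running */
        uint32_t tid;
};
#define RSEQ_SCHED_STATE_FLAG_ON_CPU    (1U << 0)

static int uspin_trylock(uspinlock_t *l)
{
        return !atomic_exchange_explicit(&l->locked, 1,
                                         memory_order_acquire);
}

static void adaptive_lock(uspinlock_t *l,
                          const volatile struct rseq_sched_state *owner)
{
        while (!uspin_trylock(l)) {
                if (!(owner->state & RSEQ_SCHED_STATE_FLAG_ON_CPU)) {
                        /* Owner was preempted: spinning is pointless and
                         * may even keep the owner off the CPU.  Sleep. */
                        block_on_lock(l);       /* hypothetical */
                        continue;
                }
                _mm_pause();    /* owner is on a CPU: keep spinning */
        }
}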
After applying the adaptive spinning, we were able to scale PREEMPT_RT to
any number of CPUs that the normal kernel could handle, with just a linear
performance hit.
This is why I'm very much interested in getting the same ability into user
space spin locks.
-- Steve