Message-ID: <ZGevZxOjJLMO9zlM@boqun-archlinux>
Date: Fri, 19 May 2023 10:18:31 -0700
From: Boqun Feng <boqun.feng@...il.com>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
"Paul E . McKenney" <paulmck@...nel.org>,
"H . Peter Anvin" <hpa@...or.com>, Paul Turner <pjt@...gle.com>,
linux-api@...r.kernel.org, Christian Brauner <brauner@...nel.org>,
Florian Weimer <fw@...eb.enyo.de>, David.Laight@...lab.com,
carlos@...hat.com, Peter Oskolkov <posk@...k.io>,
Alexander Mikhalitsyn <alexander@...alicyn.com>,
Chris Kennelly <ckennelly@...gle.com>,
Ingo Molnar <mingo@...hat.com>,
Darren Hart <dvhart@...radead.org>,
Davidlohr Bueso <dave@...olabs.net>,
André Almeida <andrealmeid@...lia.com>,
libc-alpha@...rceware.org, Steven Rostedt <rostedt@...dmis.org>,
Jonathan Corbet <corbet@....net>,
Florian Weimer <fweimer@...hat.com>
Subject: Re: [RFC PATCH 1/4] rseq: Add sched_state field to struct rseq
On Fri, May 19, 2023 at 10:15:11AM -0400, Mathieu Desnoyers wrote:
> On 2023-05-18 17:49, Boqun Feng wrote:
> > On Wed, May 17, 2023 at 11:26:51AM -0400, Mathieu Desnoyers wrote:
> [...]
> > > diff --git a/include/uapi/linux/rseq.h b/include/uapi/linux/rseq.h
> > > index c233aae5eac9..c6d8537e23ca 100644
> > > --- a/include/uapi/linux/rseq.h
> > > +++ b/include/uapi/linux/rseq.h
> > > @@ -37,6 +37,13 @@ enum rseq_cs_flags {
> > >  		(1U << RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE_BIT),
> > >  };
> > > +enum rseq_sched_state {
> > > +	/*
> > > +	 * Task is currently running on a CPU if bit is set.
> > > +	 */
> > > +	RSEQ_SCHED_STATE_ON_CPU = (1U << 0),
> > > +};
> [...]
> > > +	/*
> > > +	 * Restartable sequences sched_state field. Updated by the kernel. Read
> > > +	 * by user-space with single-copy atomicity semantics. This field can
> > > +	 * be read by any userspace thread. Aligned on 32-bit. Contains a
> >
> > Maybe this is a premature optimization, but since most of the time the
> > bit would be read by another thread, does it make sense putting the
> > "sched_state" into a different cache line to avoid false sharing?
>
> I'm puzzled by your optimization proposal, so I'll say it outright: I'm
> probably missing something.
>
Maybe it's me who is missing something ;-)
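
To make it concrete, what I had in mind was only moving the polled field
onto its own cache line, roughly like below (just a sketch, not against
the actual struct rseq layout; the 64-byte line size and the placement
are assumptions on my side):

#include <linux/types.h>

/*
 * Sketch only: keep the field that other threads busy-read on its own
 * cache line, away from rseq_cs which the owner stores to on every
 * critical section entry. Assumes 64-byte cache lines.
 */
struct rseq {
	/* ... existing fields, including rseq_cs, updated by the owner ... */

	/* Remotely-polled state, padded out to a separate cache line. */
	__u32 sched_state __attribute__((aligned(64)));
};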
> I agree that false-sharing would be an issue if various threads would
> contend for updating any field within this cache line.
>
> But the only thread responsible for updating this cache line's fields is the
> current thread, either from userspace (stores to rseq_abi->rseq_cs) or from
> the kernel (usually on return to userspace, except for this new ON_CPU bit
> clear on context switch).
>
> The other threads busy-waiting on the content of this sched_state field will
> only load from it, never store. And they will only busy-wait on it as long
But their loads can change the cache line state from "Exclusive" to
"Shared" (using MESI terms), right? And that could delay the stores of
the current thread.
> as the current task runs. When that task gets preempted, other threads will
> notice the flag change and use sys_futex instead.
>
> So the very worst I can think of in terms of pattern causing cache coherency
> traffic due to false-sharing is if the lock owner happens to have lots of
> rseq critical sections as well, causing it to repeatedly store to the
> rseq_abi->rseq_cs field, which is in the same cache line.
>
> But even then I'm wondering if this really matters, because each of those
> stores to rseq_cs would only slow down loads from other threads which will
> need to retry busy-wait anyway if the on-cpu flag is still set.
>
> So, what am I missing ? Is this heavy use of rseq critical sections while
> the lock is held the scenario you are concerned about ?
>
The case in my mind is the opposite direction: the loads from other
threads delay the stores to rseq_cs on the current thread, which I
assume are usually on the fast path. For example:
    CPU 1                               CPU 2

    lock(foo); // holding a lock
    rseq_start():
      <CPU 1 owns the cache line exclusively>
                                        lock(foo):
                                          <fail to get foo>
                                          <check whether the lock owner is on CPU>
                                          <cache line becomes Shared>
      ->rseq_cs = .. // Need to invalidate the cache line on the other CPU
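
In code, that interaction is roughly the following (simplified sketch;
struct rseq_like, rseq_set_cs() and owner_on_cpu() are made-up names
for illustration, only the roles of ->rseq_cs / ->sched_state and the
RSEQ_SCHED_STATE_ON_CPU bit come from the patch):

#include <linux/types.h>

enum rseq_sched_state {
	RSEQ_SCHED_STATE_ON_CPU = (1U << 0),	/* as in the patch */
};

/*
 * Stand-in for the two fields of interest; in the patch they live in
 * the same struct rseq and (likely) in the same cache line.
 */
struct rseq_like {
	__u64 rseq_cs;		/* stored by the owner on each c.s. entry */
	__u32 sched_state;	/* polled by the waiters */
};

/* Owner side (CPU 1): the fast-path store, which may now have to bring
 * the cache line back to an exclusive state before it can complete. */
static inline void rseq_set_cs(volatile struct rseq_like *rs, __u64 cs)
{
	rs->rseq_cs = cs;
}

/* Waiter side (CPU 2): each poll can pull the owner's line to Shared. */
static inline int owner_on_cpu(const volatile struct rseq_like *rs)
{
	return rs->sched_state & RSEQ_SCHED_STATE_ON_CPU;
}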
But as you mentioned, there is only one updater here (the current
thread), so maybe it doesn't matter... Still, since it's a userspace
ABI, I cannot help thinking "what if another bit with a different usage
pattern gets introduced in the future", so..
> Note that the heavy cache-line bouncing in my test-case happens on the lock
> structure (cmpxchg expecting NULL, setting the current thread rseq_get_abi()
> pointer on success). There are probably better ways to implement that part,
> it is currently just a simple prototype showcasing the approach.
>
Yeah.. that's a little strange, I guess you can just read the lock
owner's rseq_abi, for example:
rseq_lock_slowpath() {
	struct rseq_abi *other_rseq = lock->owner;

	if (RSEQ_ACCESS_ONCE(other_rseq->sched_state)) {
		...
	}
}
?
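
Or, spelling the same idea out a little more: spin only while the
owner's ON_CPU bit is set and fall back to futex otherwise, something
like (still just a sketch; lock->owner, try_lock(), futex_wait() and
cpu_relax() are placeholders rather than an actual API):

static void rseq_lock_slowpath(struct my_lock *lock)
{
	struct rseq_abi *other_rseq = RSEQ_ACCESS_ONCE(lock->owner);

	/* Spin only while the lock owner is running on a CPU... */
	while (other_rseq &&
	       (RSEQ_ACCESS_ONCE(other_rseq->sched_state) &
		RSEQ_SCHED_STATE_ON_CPU)) {
		if (try_lock(lock))
			return;
		cpu_relax();
	}

	/* ...otherwise block in the kernel instead of spinning. */
	futex_wait(lock);
}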
Regards,
Boqun
> Thanks,
>
> Mathieu
>
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> https://www.efficios.com
>