Message-ID: <4605b4f4-8a2b-4653-b684-9c696c36ebd0@paulmck-laptop>
Date: Tue, 21 Nov 2023 14:26:33 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ankur Arora <ankur.a.arora@...cle.com>,
linux-kernel@...r.kernel.org, tglx@...utronix.de,
torvalds@...ux-foundation.org, linux-mm@...ck.org, x86@...nel.org,
akpm@...ux-foundation.org, luto@...nel.org, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com, mingo@...hat.com,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
willy@...radead.org, mgorman@...e.de, jon.grimm@....com,
bharata@....com, raghavendra.kt@....com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com,
jgross@...e.com, andrew.cooper3@...rix.com, mingo@...nel.org,
bristot@...nel.org, mathieu.desnoyers@...icios.com,
geert@...ux-m68k.org, glaubitz@...sik.fu-berlin.de,
anton.ivanov@...bridgegreys.com, mattst88@...il.com,
krypton@...ich-teichert.org, David.Laight@...lab.com,
richard@....at, mjguzik@...il.com
Subject: Re: [RFC PATCH 48/86] rcu: handle quiescent states for PREEMPT_RCU=n
On Tue, Nov 21, 2023 at 04:38:34PM -0500, Steven Rostedt wrote:
> On Tue, 21 Nov 2023 13:14:16 -0800
> "Paul E. McKenney" <paulmck@...nel.org> wrote:
>
> > On Tue, Nov 21, 2023 at 09:30:49PM +0100, Peter Zijlstra wrote:
> > > On Tue, Nov 21, 2023 at 11:25:18AM -0800, Paul E. McKenney wrote:
> > > > #define preempt_enable() \
> > > > do { \
> > > > 	barrier(); \
> > > > 	if (!IS_ENABLED(CONFIG_PREEMPT_RCU) && raw_cpu_read(rcu_data.rcu_urgent_qs) && \
> > > > 	    ((preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK | HARDIRQ_MASK | NMI_MASK)) == PREEMPT_OFFSET) && \
> > > > 	    !irqs_disabled()) \
>
> Could we make the above an else case of the below if?
Wouldn't that cause the above preempt_count() test to always fail?
Another approach is to bury the test in preempt_count_dec_and_test(),
but I suspect that this would not make Peter any more happy than my
earlier suggestion. ;-)
> > > > 		rcu_all_qs(); \
> > > > 	if (unlikely(preempt_count_dec_and_test())) { \
> > > > 		__preempt_schedule(); \
> > > > 	} \
> > > > } while (0)
> > >
> > > Aaaaahhh, please no. We spent so much time reducing preempt_enable() to
> > > the minimal thing it is today, and this will make it blow up into something
> > > giant again.
>
> Note, the above is only true with "CONFIG_PREEMPT_RCU is not set", which
> keeps preempt_enable() for preemptible kernels with PREEMPT_RCU still minimal.
Agreed, and there is probably some workload that does not like this.
After all, current CONFIG_PREEMPT_DYNAMIC=y booted with preempt=none
would have those cond_resched() invocations. I was leery of checking
dynamic information, but maybe sched_feat() is faster than I am thinking?
(It should be with the static_branch, but not sure about the other two
access modes.)
Thanx, Paul