Message-ID: <06b15cfa-620f-d6b1-61d1-8ddfba74a2c8@linux.alibaba.com>
Date: Fri, 1 Nov 2019 23:32:32 +0800
From: Lai Jiangshan <laijs@...ux.alibaba.com>
To: paulmck@...nel.org, Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
Josh Triplett <josh@...htriplett.org>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Joel Fernandes <joel@...lfernandes.org>,
Andy Lutomirski <luto@...nel.org>,
Fenghua Yu <fenghua.yu@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
Kees Cook <keescook@...omium.org>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Dave Hansen <dave.hansen@...el.com>,
Babu Moger <Babu.Moger@....com>,
Rik van Riel <riel@...riel.com>,
"Chang S. Bae" <chang.seok.bae@...el.com>,
Jann Horn <jannh@...gle.com>,
David Windsor <dwindsor@...il.com>,
Elena Reshetova <elena.reshetova@...el.com>,
Andrea Parri <andrea.parri@...rulasolutions.com>,
Yuyang Du <duyuyang@...il.com>,
Richard Guy Briggs <rgb@...hat.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Christian Brauner <christian.brauner@...ntu.com>,
Michal Hocko <mhocko@...e.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Al Viro <viro@...iv.linux.org.uk>,
"Dmitry V. Levin" <ldv@...linux.org>, rcu@...r.kernel.org
Subject: Re: [PATCH 11/11] x86,rcu: use percpu rcu_preempt_depth
On 2019/11/1 10:30 PM, Paul E. McKenney wrote:
> On Fri, Nov 01, 2019 at 02:13:15PM +0100, Peter Zijlstra wrote:
>> On Fri, Nov 01, 2019 at 05:58:16AM -0700, Paul E. McKenney wrote:
>>> On Thu, Oct 31, 2019 at 10:08:06AM +0000, Lai Jiangshan wrote:
>>>> +/* We mask the RCU_NEED_SPECIAL bit so that it returns the real depth */
>>>> +static __always_inline int rcu_preempt_depth(void)
>>>> +{
>>>> +	return raw_cpu_read_4(__rcu_preempt_depth) & ~RCU_NEED_SPECIAL;
>>>
>>> Why not raw_cpu_generic_read()?
>>>
>>> OK, OK, I get that raw_cpu_read_4() translates directly into an "mov"
>>> instruction on x86, but given that x86 percpu_from_op() is able to
>>> adjust based on operand size, why doesn't something like raw_cpu_read()
>>> also have an x86-specific definition that adjusts based on operand size?
>>
>> The reason for preempt.h was header recursion hell.
>
> Fair enough, being as that is also the reason for _rcu_read_lock()
> not being inlined. :-/
>
>>>> +}
>>>> +
>>>> +static __always_inline void rcu_preempt_depth_set(int pc)
>>>> +{
>>>> +	int old, new;
>>>> +
>>>> +	do {
>>>> +		old = raw_cpu_read_4(__rcu_preempt_depth);
>>>> +		new = (old & RCU_NEED_SPECIAL) |
>>>> +		      (pc & ~RCU_NEED_SPECIAL);
>>>> +	} while (raw_cpu_cmpxchg_4(__rcu_preempt_depth, old, new) != old);
>>>
>>> Ummm...
>>>
>>> OK, as you know, I have long wanted _rcu_read_lock() to be inlineable.
>>> But are you -sure- that an x86 cmpxchg is faster than a function call
>>> and return? I have strong doubts on that score.
>>
>> This is a regular CMPXCHG instruction, not a LOCK prefixed one, and that
>> should make all the difference.
>
> Yes, understood, but this is also adding some arithmetic, a comparison,
> and a conditional branch. Are you -sure- that this is cheaper than
> an unconditional call and return?
rcu_preempt_depth_set() is used only in exit_rcu(), so performance
doesn't matter here. And since the RCU_NEED_SPECIAL bit is allowed
to be lost in exit_rcu(), rcu_preempt_depth_set() could be a single
raw_cpu_write_4() if performance did matter. (This cmpxchg loop is
copied from preempt.h, and since I can't predict how
rcu_preempt_depth_set() will be used in the future, I kept it
unchanged.)
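
For illustration, a minimal sketch of that simpler variant
(hypothetical; it assumes losing RCU_NEED_SPECIAL really is
acceptable on this path):

static __always_inline void rcu_preempt_depth_set(int pc)
{
	/* exit_rcu() may drop RCU_NEED_SPECIAL, so a plain store suffices. */
	raw_cpu_write_4(__rcu_preempt_depth, pc & ~RCU_NEED_SPECIAL);
}
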
+static __always_inline void rcu_preempt_depth_inc(void)
+{
+	raw_cpu_add_4(__rcu_preempt_depth, 1);
+}
This one is for rcu_read_lock(): ONE instruction.
+
+static __always_inline bool rcu_preempt_depth_dec_and_test(void)
+{
+	return GEN_UNARY_RMWcc("decl", __rcu_preempt_depth, e,
+			       __percpu_arg([var]));
+}
This one is for rcu_read_unlock(), which will be 2 instructions
("decl" and "je"), the same as preempt_enable().

These days, preempt_disable() is discouraged unless it is really
necessary, while RCU is always encouraged, so low-overhead
rcu_read_[un]lock() is essential.
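
For reference, preempt_enable() uses the same decl-plus-branch
pattern, and an unlock fast path built on the helper above might
look roughly like this (a sketch only; the slow-path function name
is assumed, not taken from the series):

static inline void rcu_read_unlock_sketch(void)
{
	barrier();
	/* "decl" + "je": the branch is taken only when the combined
	 * depth/special value reaches zero. */
	if (unlikely(rcu_preempt_depth_dec_and_test()))
		rcu_read_unlock_do_special();	/* name assumed */
	barrier();
}
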
>
>>> Plus multiplying the x86-specific code by 26 doesn't look good.
>>>
>>> And the RCU read-side nesting depth really is a per-task thing. Copying
>>> it to and from the task at context-switch time might make sense if we
>>> had a serious optimization, but it does not appear that we do.
Once upon a time, __preempt_count was also copied to and from the
task at context switch, and it worked well.
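
As a hypothetical sketch (the helper name is assumed;
rcu_read_lock_nesting is the existing task_struct field), such a
save/restore at context switch could be as small as:

static inline void rcu_preempt_depth_context_switch(struct task_struct *prev,
						    struct task_struct *next)
{
	/* Park the outgoing task's depth, then load the incoming one. */
	prev->rcu_read_lock_nesting = rcu_preempt_depth();
	rcu_preempt_depth_set(next->rcu_read_lock_nesting);
}
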
>>>
>>> You original patch some years back, ill-received though it was at the
>>> time, is looking rather good by comparison. Plus it did not require
>>> architecture-specific code!
>>
>> Right, so the per-cpu preempt_count code relies on the preempt_count
>> being invariant over context switches. That means we never have to
>> save/restore the thing.
>>
>> For (preemptible) rcu, this is 'obviously' not the case.
>>
>> That said, I've not looked over this patch series, I only got 1 actual
>> patch, not the whole series, and I've not had time to go dig out the
>> rest..
>
> I have taken a couple of the earlier patches in the series.
>
> Perhaps inlining these things is instead a job for the long anticipated
> GCC LTO? ;-)
Adding a kernel/offset.c and some Makefile machinery would help
inline these things. But I don't think Linus will be happy about
introducing kernel/offset.c. That would make rcu_read_lock() 3
instructions and rcu_read_unlock() 5, which doesn't taste so
delicious.

Moving rcu_read_lock_nesting into struct thread_info is another
possible way. The instruction counts are also 3 and 5.
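
A sketch of that thread_info alternative (hypothetical field and
helper names):

static __always_inline void rcu_read_lock_via_thread_info(void)
{
	/* On x86: one load to find current's thread_info, then a
	 * fixed-offset increment -- roughly the 3 instructions
	 * mentioned above. */
	current_thread_info()->rcu_preempt_depth++;	/* field assumed */
	barrier();
}
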
Thanks
Lai