Message-Id: <20181113100836.567dcbdafa13546928b62d7a@kernel.org>
Date: Tue, 13 Nov 2018 10:08:36 -0800
From: Masami Hiramatsu <mhiramat@...nel.org>
To: paulmck@...ux.ibm.com
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org,
"Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>,
Anil S Keshavamurthy <anil.s.keshavamurthy@...el.com>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH tip/core/rcu 20/41] kprobes: Replace synchronize_sched()
with synchronize_rcu()
On Sun, 11 Nov 2018 19:19:16 -0800
"Paul E. McKenney" <paulmck@...ux.ibm.com> wrote:
> On Mon, Nov 12, 2018 at 12:00:48PM +0900, Masami Hiramatsu wrote:
> > On Sun, 11 Nov 2018 11:43:49 -0800
> > "Paul E. McKenney" <paulmck@...ux.ibm.com> wrote:
> >
> > > Now that synchronize_rcu() waits for preempt-disable regions of code
> > > as well as RCU read-side critical sections, synchronize_sched() can be
> > > replaced by synchronize_rcu(). This commit therefore makes this change.
> >
> > Do you mean that synchronize_rcu() can ensure that any interrupt handler
> > (which runs in a preempt-disabled state) has finished, even on a
> > non-preemptive kernel?
>
> Yes, but only as of this merge window. See this commit:
>
> 3e3100989869 ("rcu: Defer reporting RCU-preempt quiescent states when disabled")
OK, I also found the commit that makes those two the same:
45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")
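
To illustrate the guarantee being relied on here, a minimal, hypothetical
sketch (not part of this patch; names are made up) of the pattern kprobes
uses: the updater unlinks an entry with list_del_rcu() and waits with
synchronize_rcu() before freeing it, while the reader traverses the list
from a preempt-disabled context (e.g. a probe handler invoked from the
breakpoint trap) rather than under rcu_read_lock().

#include <linux/rculist.h>
#include <linux/mutex.h>
#include <linux/slab.h>

static LIST_HEAD(probe_list);           /* updates serialized by probe_mutex */
static DEFINE_MUTEX(probe_mutex);

struct probe {
        struct list_head list;
        void (*handler)(void);
};

/*
 * Reader: runs with preemption disabled (e.g. from the trap handler).
 * Since v4.20 the preempt-disabled region itself counts as an RCU
 * read-side critical section for synchronize_rcu().
 */
void probe_dispatch(void)
{
        struct probe *p;

        list_for_each_entry_rcu(p, &probe_list, list)
                p->handler();
}

/* Updater: unlink, wait for all preempt-disabled readers, then free. */
void probe_remove(struct probe *p)
{
        mutex_lock(&probe_mutex);
        list_del_rcu(&p->list);
        mutex_unlock(&probe_mutex);

        synchronize_rcu();      /* was synchronize_sched() before v4.20 */
        kfree(p);
}
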
Acked-by: Masami Hiramatsu <mhiramat@...nel.org>
Thank you!
>
> Don't try this in v4.19 or earlier, but v4.20 and later is OK. ;-)
>
> Thanx, Paul
>
> > If so, I agree with these changes.
> >
> > Thank you,
> >
> > >
> > > Signed-off-by: Paul E. McKenney <paulmck@...ux.ibm.com>
> > > Cc: "Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>
> > > Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@...el.com>
> > > Cc: "David S. Miller" <davem@...emloft.net>
> > > Cc: Masami Hiramatsu <mhiramat@...nel.org>
> > > ---
> > > kernel/kprobes.c | 10 +++++-----
> > > 1 file changed, 5 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> > > index 90e98e233647..08e31d863191 100644
> > > --- a/kernel/kprobes.c
> > > +++ b/kernel/kprobes.c
> > > @@ -229,7 +229,7 @@ static int collect_garbage_slots(struct kprobe_insn_cache *c)
> > > struct kprobe_insn_page *kip, *next;
> > >
> > > /* Ensure no-one is interrupted on the garbages */
> > > - synchronize_sched();
> > > + synchronize_rcu();
> > >
> > > list_for_each_entry_safe(kip, next, &c->pages, list) {
> > > int i;
> > > @@ -1382,7 +1382,7 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
> > > if (ret) {
> > > ap->flags |= KPROBE_FLAG_DISABLED;
> > > list_del_rcu(&p->list);
> > > - synchronize_sched();
> > > + synchronize_rcu();
> > > }
> > > }
> > > }
> > > @@ -1597,7 +1597,7 @@ int register_kprobe(struct kprobe *p)
> > > ret = arm_kprobe(p);
> > > if (ret) {
> > > hlist_del_rcu(&p->hlist);
> > > - synchronize_sched();
> > > + synchronize_rcu();
> > > goto out;
> > > }
> > > }
> > > @@ -1776,7 +1776,7 @@ void unregister_kprobes(struct kprobe **kps, int num)
> > > kps[i]->addr = NULL;
> > > mutex_unlock(&kprobe_mutex);
> > >
> > > - synchronize_sched();
> > > + synchronize_rcu();
> > > for (i = 0; i < num; i++)
> > > if (kps[i]->addr)
> > > __unregister_kprobe_bottom(kps[i]);
> > > @@ -1966,7 +1966,7 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
> > > rps[i]->kp.addr = NULL;
> > > mutex_unlock(&kprobe_mutex);
> > >
> > > - synchronize_sched();
> > > + synchronize_rcu();
> > > for (i = 0; i < num; i++) {
> > > if (rps[i]->kp.addr) {
> > > __unregister_kprobe_bottom(&rps[i]->kp);
> > > --
> > > 2.17.1
> > >
> >
> >
> > --
> > Masami Hiramatsu <mhiramat@...nel.org>
> >
>
--
Masami Hiramatsu <mhiramat@...nel.org>