Message-Id: <20181112120048.12b3fb7549f4b292fea1eefd@kernel.org>
Date: Mon, 12 Nov 2018 12:00:48 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: "Paul E. McKenney" <paulmck@...ux.ibm.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org,
"Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>,
Anil S Keshavamurthy <anil.s.keshavamurthy@...el.com>,
"David S. Miller" <davem@...emloft.net>,
Masami Hiramatsu <mhiramat@...nel.org>
Subject: Re: [PATCH tip/core/rcu 20/41] kprobes: Replace synchronize_sched()
with synchronize_rcu()
On Sun, 11 Nov 2018 11:43:49 -0800
"Paul E. McKenney" <paulmck@...ux.ibm.com> wrote:
> Now that synchronize_rcu() waits for preempt-disable regions of code
> as well as RCU read-side critical sections, synchronize_sched() can be
> replaced by synchronize_rcu(). This commit therefore makes this change.
Do you mean that synchronize_rcu() can now ensure that any interrupt handler
(which runs in a preempt-disabled context) has finished, even on a
non-preemptive kernel?
If so, I agree with these changes.
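
For context, a minimal sketch of the pattern kprobes relies on here
(names are taken from kernel/kprobes.c, but the function body is
illustrative only, not the actual code):

	/* Sketch: unlink a probe and wait out all possible users. */
	static void example_remove_probe(struct kprobe *p)
	{
		mutex_lock(&kprobe_mutex);
		/* Unlink; readers may still be referencing the probe. */
		hlist_del_rcu(&p->hlist);
		mutex_unlock(&kprobe_mutex);

		/*
		 * Assuming the consolidated grace period works as the
		 * commit message states, synchronize_rcu() waits for
		 * rcu_read_lock() readers and for preempt-disabled
		 * regions (e.g. interrupt handlers hitting the
		 * breakpoint) before teardown proceeds.
		 */
		synchronize_rcu();
		__unregister_kprobe_bottom(p);
	}
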
Thank you,
>
> Signed-off-by: Paul E. McKenney <paulmck@...ux.ibm.com>
> Cc: "Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>
> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@...el.com>
> Cc: "David S. Miller" <davem@...emloft.net>
> Cc: Masami Hiramatsu <mhiramat@...nel.org>
> ---
> kernel/kprobes.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 90e98e233647..08e31d863191 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -229,7 +229,7 @@ static int collect_garbage_slots(struct kprobe_insn_cache *c)
> struct kprobe_insn_page *kip, *next;
>
> /* Ensure no-one is interrupted on the garbages */
> - synchronize_sched();
> + synchronize_rcu();
>
> list_for_each_entry_safe(kip, next, &c->pages, list) {
> int i;
> @@ -1382,7 +1382,7 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
> if (ret) {
> ap->flags |= KPROBE_FLAG_DISABLED;
> list_del_rcu(&p->list);
> - synchronize_sched();
> + synchronize_rcu();
> }
> }
> }
> @@ -1597,7 +1597,7 @@ int register_kprobe(struct kprobe *p)
> ret = arm_kprobe(p);
> if (ret) {
> hlist_del_rcu(&p->hlist);
> - synchronize_sched();
> + synchronize_rcu();
> goto out;
> }
> }
> @@ -1776,7 +1776,7 @@ void unregister_kprobes(struct kprobe **kps, int num)
> kps[i]->addr = NULL;
> mutex_unlock(&kprobe_mutex);
>
> - synchronize_sched();
> + synchronize_rcu();
> for (i = 0; i < num; i++)
> if (kps[i]->addr)
> __unregister_kprobe_bottom(kps[i]);
> @@ -1966,7 +1966,7 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
> rps[i]->kp.addr = NULL;
> mutex_unlock(&kprobe_mutex);
>
> - synchronize_sched();
> + synchronize_rcu();
> for (i = 0; i < num; i++) {
> if (rps[i]->kp.addr) {
> __unregister_kprobe_bottom(&rps[i]->kp);
> --
> 2.17.1
>
--
Masami Hiramatsu <mhiramat@...nel.org>