Message-Id: <20170525151401.GP3956@linux.vnet.ibm.com>
Date: Thu, 25 May 2017 08:14:01 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Masami Hiramatsu <mhiramat@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Ananth N Mavinakayanahalli <ananth@...ux.vnet.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H . Peter Anvin" <hpa@...or.com>
Subject: Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for
 optprobe with CONFIG_PREEMPT

On Thu, May 25, 2017 at 08:15:55AM +0200, Ingo Molnar wrote:
>
> * Masami Hiramatsu <mhiramat@...nel.org> wrote:
>
> > --- a/kernel/kprobes.c
> > +++ b/kernel/kprobes.c
> > @@ -377,6 +377,23 @@ static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
> >  static bool kprobes_allow_optimization;
> >
> >  /*
> > + * Synchronizing wait on trampoline code for interrupted tasks/threads.
> > + * Since threads running on dynamically allocated trampoline code can
> > + * be interrupted, kprobes has to wait until those tasks are back on
> > + * track and scheduled again. If the kernel is preemptive, a thread
> > + * can also be preempted by other tasks while on the trampoline. In
> > + * that case, this calls synchronize_rcu_tasks() to wait for them.
> > + */
> > +static void synchronize_on_trampoline(void)
> > +{
> > +#ifdef CONFIG_PREEMPT
> > +	synchronize_rcu_tasks();
> > +#else
> > +	synchronize_sched();
> > +#endif
> > +}
>
> So that's really unacceptably ugly.
>
> Paul, I still question the need to have tasks-RCU as a Kconfig distinction,
> _especially_ if its API usage results in such ugly secondary #ifdefs...
>
> Why isn't there a single synchronize_rcu_tasks() API function, which does what is
> expected, where the _RCU_ code figures out how to implement it?
>
> I.e.:
>
> - There should be no user configurable TASKS_RCU Kconfig setting - at most a
> helper Kconfig that is automatically selected by the RCU code itself.
>
> - Both ftrace and kprobes would use a simple synchronize_rcu_tasks() call.
>
> Thanks,
>
> 	Ingo

How about the following (untested) patch?
This is against -rcu's rcu/dev branch, FWIW.
I am also queuing patches with other cleanups enabled by this approach,
including one to do_exit().
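
For reference, with this patch applied the helper from Masami's RFC
above no longer needs the #ifdef.  A minimal sketch (untested, keeping
the synchronize_on_trampoline() name from his patch, which is not in
mainline):

	/*
	 * With synchronize_rcu_tasks() available unconditionally, the
	 * wrapper collapses to a single call.  Under !PREEMPT it maps
	 * to synchronize_sched(), so non-preemptible kernels behave
	 * exactly as before: tasks cannot be preempted while on the
	 * trampoline there, so waiting for an RCU-sched grace period
	 * is sufficient.
	 */
	static void synchronize_on_trampoline(void)
	{
		synchronize_rcu_tasks();
	}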
Thanx, Paul
------------------------------------------------------------------------
commit 9ab47fba45cea06e223e524d392621b64c174720
Author: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Date: Thu May 25 08:05:00 2017 -0700

    rcu: Drive TASKS_RCU directly off of PREEMPT

    The only real use of TASKS_RCU is when PREEMPT is set; otherwise,
    RCU-sched is used instead.  This commit therefore makes
    synchronize_rcu_tasks() and call_rcu_tasks() available always, but
    mapped to synchronize_sched() and call_rcu_sched(), respectively,
    when !PREEMPT.  This approach also allows some #ifdefs to be removed
    from rcutorture.

    Reported-by: Ingo Molnar <mingo@...nel.org>
    Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
---
 include/linux/rcupdate.h |  6 ++++--
 kernel/rcu/Kconfig       |  3 +--
 kernel/rcu/rcutorture.c  | 17 +----------------
 3 files changed, 6 insertions(+), 20 deletions(-)
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index f816fc72b51e..c3f380befdd7 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -58,8 +58,6 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func);
 void call_rcu_bh(struct rcu_head *head, rcu_callback_t func);
 void call_rcu_sched(struct rcu_head *head, rcu_callback_t func);
 void synchronize_sched(void);
-void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
-void synchronize_rcu_tasks(void);
 void rcu_barrier_tasks(void);
 
 #ifdef CONFIG_PREEMPT_RCU
@@ -176,10 +174,14 @@ extern struct srcu_struct tasks_rcu_exit_srcu;
 		rcu_all_qs(); \
 		rcu_note_voluntary_context_switch_lite(t); \
 	} while (0)
+void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
+void synchronize_rcu_tasks(void);
 #else /* #ifdef CONFIG_TASKS_RCU */
 #define TASKS_RCU(x) do { } while (0)
 #define rcu_note_voluntary_context_switch_lite(t) do { } while (0)
 #define rcu_note_voluntary_context_switch(t) rcu_all_qs()
+#define call_rcu_tasks call_rcu_sched
+#define synchronize_rcu_tasks synchronize_sched
 #endif /* #else #ifdef CONFIG_TASKS_RCU */
 
 /**
diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
index be90c945063f..9210379c0353 100644
--- a/kernel/rcu/Kconfig
+++ b/kernel/rcu/Kconfig
@@ -69,8 +69,7 @@ config TREE_SRCU
 	  This option selects the full-fledged version of SRCU.
 
 config TASKS_RCU
-	bool
-	default n
+	def_bool PREEMPT
 	select SRCU
 	help
 	  This option enables a task-based RCU implementation that uses
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index aedc8f2ad955..273032dc8f2d 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -659,8 +659,6 @@ static struct rcu_torture_ops sched_ops = {
.name = "sched"
};
-#ifdef CONFIG_TASKS_RCU
-
/*
* Definitions for RCU-tasks torture testing.
*/
@@ -698,24 +696,11 @@ static struct rcu_torture_ops tasks_ops = {
.name = "tasks"
};
-#define RCUTORTURE_TASKS_OPS &tasks_ops,
-
static bool __maybe_unused torturing_tasks(void)
{
return cur_ops == &tasks_ops;
}
-#else /* #ifdef CONFIG_TASKS_RCU */
-
-#define RCUTORTURE_TASKS_OPS
-
-static bool __maybe_unused torturing_tasks(void)
-{
- return false;
-}
-
-#endif /* #else #ifdef CONFIG_TASKS_RCU */
-
/*
* RCU torture priority-boost testing. Runs one real-time thread per
* CPU for moderate bursts, repeatedly registering RCU callbacks and
@@ -1712,7 +1697,7 @@ rcu_torture_init(void)
 	int firsterr = 0;
 	static struct rcu_torture_ops *torture_ops[] = {
 		&rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &srcud_ops,
-		&sched_ops, RCUTORTURE_TASKS_OPS
+		&sched_ops, &tasks_ops,
 	};
 
 	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
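
For completeness (not part of the patch): with &tasks_ops always present
in torture_ops[], the tasks flavor can be selected on any kernel, for
example via "modprobe rcutorture torture_type=tasks".  Under !PREEMPT
the underlying calls simply map to their RCU-sched equivalents per the
rcupdate.h defines above, so the test remains meaningful there as well.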