Message-Id: <20181101231617.GA10882@linux.ibm.com>
Date: Thu, 1 Nov 2018 16:16:17 -0700
From: "Paul E. McKenney" <paulmck@...ux.ibm.com>
To: tglx@...utronix.de, bigeasy@...utronix.de
Cc: linux-rt-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: rcu: Make ksoftirqd do RCU quiescent states
> Implementing RCU-bh in terms of RCU-preempt makes the system vulnerable
> to network-based denial-of-service attacks. This patch therefore
> makes __do_softirq() invoke rcu_bh_qs(), but only when __do_softirq()
> is running in ksoftirqd context. A wrapper layer is interposed so that
> other calls to __do_softirq() avoid invoking rcu_bh_qs(). The underlying
> function __do_softirq_common() does the actual work.
>
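In other words, the layering described above looks roughly like the sketch
below. The helper name do_softirq_no_qs() is purely illustrative; only
__do_softirq(), __do_softirq_common(), and rcu_bh_qs() are named in the
patch description:

	/* Does the actual softirq processing. */
	static void __do_softirq_common(void)
	{
		/* ... run the pending softirq handlers ... */
	}

	/* ksoftirqd path: no RCU-preempt reader can be active here. */
	void __do_softirq(void)
	{
		__do_softirq_common();
		rcu_bh_qs();		/* report the quiescent state */
	}

	/* Wrapper for the other callers, e.g. via local_bh_enable(). */
	static void do_softirq_no_qs(void)
	{
		__do_softirq_common();	/* deliberately no rcu_bh_qs() */
	}
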
> The reason that rcu_bh_qs() is bad in these non-ksoftirqd contexts is
> that there might be a local_bh_enable() inside an RCU-preempt read-side
> critical section. This local_bh_enable() can invoke __do_softirq()
> directly, so if __do_softirq() were to invoke rcu_bh_qs() (which just
> calls rcu_preempt_qs() in the PREEMPT_RT_FULL case), there would be
> an illegal RCU-preempt quiescent state in the middle of an RCU-preempt
> read-side critical section. Therefore, quiescent states can only happen
> in cases where __do_softirq() is invoked directly from ksoftirqd.
I -think- that the need for this goes away in the current merge window
because RCU-bh is going away. There might still be an rt-specific need
to disable irqs, though.
Thanx, Paul
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> Link: http://lkml.kernel.org/r/20111005184518.GA21601@linux.vnet.ibm.com
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 197088cdb56e..968579b86401 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -244,7 +244,19 @@ void rcu_sched_qs(void)
> this_cpu_ptr(&rcu_sched_data), true);
> }
>
> -#ifndef CONFIG_PREEMPT_RT_FULL
> +#ifdef CONFIG_PREEMPT_RT_FULL
> +static void rcu_preempt_qs(void);
> +
> +void rcu_bh_qs(void)
> +{
> + unsigned long flags;
> +
> + /* rcu_preempt_qs() must be invoked with irqs disabled. */
> + local_irq_save(flags);
> + rcu_preempt_qs();
> + local_irq_restore(flags);
> +}
> +#else
> void rcu_bh_qs(void)
> {
> RCU_LOCKDEP_WARN(preemptible(), "rcu_bh_qs() invoked with preemption enabled!!!");
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 429a2f144e19..bee9bffeb0ce 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -29,6 +29,7 @@
> #include <linux/oom.h>
> #include <linux/sched/debug.h>
> #include <linux/smpboot.h>
> +#include <linux/jiffies.h>
> #include <linux/sched/isolation.h>
> #include <uapi/linux/sched/types.h>
> #include "../time/tick-internal.h"
> @@ -1407,7 +1408,7 @@ static void rcu_prepare_kthreads(int cpu)
>
> #endif /* #else #ifdef CONFIG_RCU_BOOST */
>
> -#if !defined(CONFIG_RCU_FAST_NO_HZ)
> +#if !defined(CONFIG_RCU_FAST_NO_HZ) || defined(CONFIG_PREEMPT_RT_FULL)
>
> /*
> * Check to see if any future RCU-related work will need to be done
> @@ -1423,7 +1424,9 @@ int rcu_needs_cpu(u64 basemono, u64 *nextevt)
> *nextevt = KTIME_MAX;
> return rcu_cpu_has_callbacks(NULL);
> }
> +#endif /* !defined(CONFIG_RCU_FAST_NO_HZ) || defined(CONFIG_PREEMPT_RT_FULL) */
>
> +#if !defined(CONFIG_RCU_FAST_NO_HZ)
> /*
> * Because we do not have RCU_FAST_NO_HZ, don't bother cleaning up
> * after it.
> @@ -1520,6 +1523,8 @@ static bool __maybe_unused rcu_try_advance_all_cbs(void)
> return cbs_ready;
> }
>
> +#ifndef CONFIG_PREEMPT_RT_FULL
> +
> /*
> * Allow the CPU to enter dyntick-idle mode unless it has callbacks ready
> * to invoke. If the CPU has callbacks, try to advance them. Tell the
> @@ -1562,6 +1567,7 @@ int rcu_needs_cpu(u64 basemono, u64 *nextevt)
> *nextevt = basemono + dj * TICK_NSEC;
> return 0;
> }
> +#endif /* #ifndef CONFIG_PREEMPT_RT_FULL */
>
> /*
> * Prepare a CPU for idle from an RCU perspective. The first major task
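For reference, the net effect of the tree_plugin.h hunks is roughly the
following preprocessor layout (bodies elided; the #else placement is
inferred from context rather than shown verbatim above):

	#if !defined(CONFIG_RCU_FAST_NO_HZ) || defined(CONFIG_PREEMPT_RT_FULL)
	int rcu_needs_cpu(u64 basemono, u64 *nextevt)
	{
		/* simple version: no dyntick-idle laziness */
	}
	#endif

	#if !defined(CONFIG_RCU_FAST_NO_HZ)
	/* empty cleanup stubs: nothing to do without RCU_FAST_NO_HZ */
	#else
	/* ... RCU_FAST_NO_HZ machinery ... */
	#ifndef CONFIG_PREEMPT_RT_FULL
	int rcu_needs_cpu(u64 basemono, u64 *nextevt)
	{
		/* dyntick-idle version, compiled out on PREEMPT_RT_FULL */
	}
	#endif
	/* ... */
	#endif

That is, a PREEMPT_RT_FULL kernel always uses the simple rcu_needs_cpu(),
even when CONFIG_RCU_FAST_NO_HZ is set.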