Message-ID: <20191003140955.GA27003@lenoir>
Date: Thu, 3 Oct 2019 16:10:52 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: paulmck@...nel.org
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
mingo@...nel.org, jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org,
"Paul E. McKenney" <paulmck@...ux.ibm.com>
Subject: Re: [PATCH tip/core/rcu 03/12] rcu: Force on tick when invoking lots of callbacks

On Wed, Oct 02, 2019 at 06:38:54PM -0700, paulmck@...nel.org wrote:
> From: "Paul E. McKenney" <paulmck@...ux.ibm.com>
>
> Callback invocation can run for a significant time period, and within
> CONFIG_NO_HZ_FULL=y kernels, this period will be devoid of scheduler-clock
> interrupts. In-kernel execution without such interrupts can cause all
> manner of malfunction, with RCU CPU stall warnings being but one result.
>
> This commit therefore forces scheduling-clock interrupts on whenever more
> than a few RCU callbacks are invoked. Because offloaded callback invocation
> can be preempted, this forcing is withdrawn on each context switch. This
> in turn requires that the loop invoking RCU callbacks reiterate the forcing
> periodically.
>
> [ paulmck: Apply Joel Fernandes TICK_DEP_MASK_RCU->TICK_DEP_BIT_RCU fix. ]
> Signed-off-by: Paul E. McKenney <paulmck@...ux.ibm.com>
> ---
> kernel/rcu/tree.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 8110514..db673ae 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2151,6 +2151,8 @@ static void rcu_do_batch(struct rcu_data *rdp)
> 	rcu_nocb_unlock_irqrestore(rdp, flags);
> 
> 	/* Invoke callbacks. */
> +	if (IS_ENABLED(CONFIG_NO_HZ_FULL))
No need for the IS_ENABLED(), the API takes care of that.
> +		tick_dep_set_task(current, TICK_DEP_BIT_RCU);
> 	rhp = rcu_cblist_dequeue(&rcl);
> 	for (; rhp; rhp = rcu_cblist_dequeue(&rcl)) {
> 		debug_rcu_head_unqueue(rhp);
> @@ -2217,6 +2219,8 @@ static void rcu_do_batch(struct rcu_data *rdp)
> 	/* Re-invoke RCU core processing if there are callbacks remaining. */
> 	if (!offloaded && rcu_segcblist_ready_cbs(&rdp->cblist))
> 		invoke_rcu_core();
> +	if (IS_ENABLED(CONFIG_NO_HZ_FULL))
Same here.
Thanks.
> +		tick_dep_clear_task(current, TICK_DEP_BIT_RCU);
> }
>
> /*
> --
> 2.9.5
>