Message-ID: <20201119181235.GX1437@paulmck-ThinkPad-P72>
Date: Thu, 19 Nov 2020 10:12:35 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: qiang.zhang@...driver.com
Cc: jiangshanlai@...il.com, rostedt@...dmis.org, josh@...htriplett.org,
rcu@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] srcu: Remove srcu_cblist_invoking member from sdp
On Thu, Nov 19, 2020 at 01:34:11PM +0800, qiang.zhang@...driver.com wrote:
> From: Zqiang <qiang.zhang@...driver.com>
>
> Workqueues guarantee that multiple queuings of the same sdp->work
> execute sequentially in rcu_gp_wq, so srcu_cblist_invoking is not
> needed to prevent concurrent execution. Remove it.
>
> Signed-off-by: Zqiang <qiang.zhang@...driver.com>
Good job analyzing the code, which is very good to see!!!
But this flag does have a potential purpose.  Right now, it is OK to invoke
synchronize_srcu() during early boot, that is, before the scheduler
has started. But there is a gap from the time that the scheduler has
initialized (so that preemption and blocking are possible) and the time
that workqueues are initialized and fully functional. Only after that
is it once again OK to use synchronize_srcu().
If synchronize_srcu() is ever required to work correctly during that
time period, it will need to directly invoke the functions that are
currently run in workqueue context. Which means that there will then be
the possibility of two instances of these functions running just after
workqueues are available.
Thanx, Paul
> ---
> include/linux/srcutree.h | 1 -
> kernel/rcu/srcutree.c | 8 ++------
> 2 files changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
> index 9cfcc8a756ae..62d8312b5451 100644
> --- a/include/linux/srcutree.h
> +++ b/include/linux/srcutree.h
> @@ -31,7 +31,6 @@ struct srcu_data {
> struct rcu_segcblist srcu_cblist; /* List of callbacks.*/
> unsigned long srcu_gp_seq_needed; /* Furthest future GP needed. */
> unsigned long srcu_gp_seq_needed_exp; /* Furthest future exp GP. */
> - bool srcu_cblist_invoking; /* Invoking these CBs? */
> struct timer_list delay_work; /* Delay for CB invoking */
> struct work_struct work; /* Context for CB invoking. */
> struct rcu_head srcu_barrier_head; /* For srcu_barrier() use. */
> diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> index 3c5e2806e0b9..c4d5cd2567a6 100644
> --- a/kernel/rcu/srcutree.c
> +++ b/kernel/rcu/srcutree.c
> @@ -134,7 +134,6 @@ static void init_srcu_struct_nodes(struct srcu_struct *ssp, bool is_static)
> sdp = per_cpu_ptr(ssp->sda, cpu);
> spin_lock_init(&ACCESS_PRIVATE(sdp, lock));
> rcu_segcblist_init(&sdp->srcu_cblist);
> - sdp->srcu_cblist_invoking = false;
> sdp->srcu_gp_seq_needed = ssp->srcu_gp_seq;
> sdp->srcu_gp_seq_needed_exp = ssp->srcu_gp_seq;
> sdp->mynode = &snp_first[cpu / levelspread[level]];
> @@ -1254,14 +1253,11 @@ static void srcu_invoke_callbacks(struct work_struct *work)
> spin_lock_irq_rcu_node(sdp);
> rcu_segcblist_advance(&sdp->srcu_cblist,
> rcu_seq_current(&ssp->srcu_gp_seq));
> - if (sdp->srcu_cblist_invoking ||
> - !rcu_segcblist_ready_cbs(&sdp->srcu_cblist)) {
> + if (!rcu_segcblist_ready_cbs(&sdp->srcu_cblist)) {
> spin_unlock_irq_rcu_node(sdp);
> return; /* Someone else on the job or nothing to do. */
> }
>
> - /* We are on the job! Extract and invoke ready callbacks. */
> - sdp->srcu_cblist_invoking = true;
> rcu_segcblist_extract_done_cbs(&sdp->srcu_cblist, &ready_cbs);
> len = ready_cbs.len;
> spin_unlock_irq_rcu_node(sdp);
> @@ -1282,7 +1278,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
> rcu_segcblist_add_len(&sdp->srcu_cblist, -len);
> (void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
> rcu_seq_snap(&ssp->srcu_gp_seq));
> - sdp->srcu_cblist_invoking = false;
> +
> more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist);
> spin_unlock_irq_rcu_node(sdp);
> if (more)
> --
> 2.17.1
>