Open Source and information security mailing list archives
 
Date:   Fri, 20 Nov 2020 06:19:03 +0000
From:   "Zhang, Qiang" <Qiang.Zhang@...driver.com>
To:     "paulmck@...nel.org" <paulmck@...nel.org>
CC:     "jiangshanlai@...il.com" <jiangshanlai@...il.com>,
        "rostedt@...dmis.org" <rostedt@...dmis.org>,
        "josh@...htriplett.org" <josh@...htriplett.org>,
        "rcu@...r.kernel.org" <rcu@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] srcu: Remove srcu_cblist_invoking member from sdp



________________________________________
From: Paul E. McKenney <paulmck@...nel.org>
Sent: November 20, 2020, 2:12
To: Zhang, Qiang
Cc: jiangshanlai@...il.com; rostedt@...dmis.org; josh@...htriplett.org; rcu@...r.kernel.org; linux-kernel@...r.kernel.org
Subject: Re: [PATCH] srcu: Remove srcu_cblist_invoking member from sdp


On Thu, Nov 19, 2020 at 01:34:11PM +0800, qiang.zhang@...driver.com wrote:
> From: Zqiang <qiang.zhang@...driver.com>
>
> The workqueue machinery already guarantees that multiple queuings of
> the same sdp->work execute sequentially in rcu_gp_wq, so
> srcu_cblist_invoking is not needed to prevent concurrent execution.
> Remove it.
>
> Signed-off-by: Zqiang <qiang.zhang@...driver.com>

>Good job analyzing the code, which is very good to see!!!
>
>But these do have a potential purpose.  Right now, it is OK to invoke
>synchronize_srcu() during early boot, that is, before the scheduler
>has started.  But there is a gap from the time that the scheduler has
>initialized (so that preemption and blocking are possible) and the time
>that workqueues are initialized and fully functional.  Only after that
>is it once again OK to use synchronize_srcu().
>
>If synchronize_srcu() is ever required to work correctly during that
>time period, it will need to directly invoke the functions that are
>currently run in workqueue context.  Which means that there will then be
>the possibility of two instances of these functions running just after
>workqueues are available.
>
>                                                       Thanx, Paul

Thanks Paul.

> ---
>  include/linux/srcutree.h | 1 -
>  kernel/rcu/srcutree.c    | 8 ++------
>  2 files changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
> index 9cfcc8a756ae..62d8312b5451 100644
> --- a/include/linux/srcutree.h
> +++ b/include/linux/srcutree.h
> @@ -31,7 +31,6 @@ struct srcu_data {
>       struct rcu_segcblist srcu_cblist;       /* List of callbacks.*/
>       unsigned long srcu_gp_seq_needed;       /* Furthest future GP needed. */
>       unsigned long srcu_gp_seq_needed_exp;   /* Furthest future exp GP. */
> -     bool srcu_cblist_invoking;              /* Invoking these CBs? */
>       struct timer_list delay_work;           /* Delay for CB invoking */
>       struct work_struct work;                /* Context for CB invoking. */
>       struct rcu_head srcu_barrier_head;      /* For srcu_barrier() use. */
> diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> index 3c5e2806e0b9..c4d5cd2567a6 100644
> --- a/kernel/rcu/srcutree.c
> +++ b/kernel/rcu/srcutree.c
> @@ -134,7 +134,6 @@ static void init_srcu_struct_nodes(struct srcu_struct *ssp, bool is_static)
>               sdp = per_cpu_ptr(ssp->sda, cpu);
>               spin_lock_init(&ACCESS_PRIVATE(sdp, lock));
>               rcu_segcblist_init(&sdp->srcu_cblist);
> -             sdp->srcu_cblist_invoking = false;
>               sdp->srcu_gp_seq_needed = ssp->srcu_gp_seq;
>               sdp->srcu_gp_seq_needed_exp = ssp->srcu_gp_seq;
>               sdp->mynode = &snp_first[cpu / levelspread[level]];
> @@ -1254,14 +1253,11 @@ static void srcu_invoke_callbacks(struct work_struct *work)
>       spin_lock_irq_rcu_node(sdp);
>       rcu_segcblist_advance(&sdp->srcu_cblist,
>                             rcu_seq_current(&ssp->srcu_gp_seq));
> -     if (sdp->srcu_cblist_invoking ||
> -         !rcu_segcblist_ready_cbs(&sdp->srcu_cblist)) {
> +     if (!rcu_segcblist_ready_cbs(&sdp->srcu_cblist)) {
>               spin_unlock_irq_rcu_node(sdp);
>               return;  /* Someone else on the job or nothing to do. */
>       }
>
> -     /* We are on the job!  Extract and invoke ready callbacks. */
> -     sdp->srcu_cblist_invoking = true;
>       rcu_segcblist_extract_done_cbs(&sdp->srcu_cblist, &ready_cbs);
>       len = ready_cbs.len;
>       spin_unlock_irq_rcu_node(sdp);
> @@ -1282,7 +1278,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
>       rcu_segcblist_add_len(&sdp->srcu_cblist, -len);
>       (void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
>                                      rcu_seq_snap(&ssp->srcu_gp_seq));
> -     sdp->srcu_cblist_invoking = false;
> +
>       more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist);
>       spin_unlock_irq_rcu_node(sdp);
>       if (more)
> --
> 2.17.1
>
