Date:   Thu, 10 Sep 2020 03:25:27 +0000
From:   "Zhang, Qiang" <Qiang.Zhang@...driver.com>
To:     "paulmck@...nel.org" <paulmck@...nel.org>
CC:     Joel Fernandes <joel@...lfernandes.org>,
        Uladzislau Rezki <urezki@...il.com>,
        Josh Triplett <josh@...htriplett.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        rcu <rcu@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: RCU: Question rcu_preempt_blocked_readers_cgp in rcu_gp_fqs_loop func



________________________________________
From: Paul E. McKenney <paulmck@...nel.org>
Sent: September 9, 2020 19:22
To: Zhang, Qiang
Cc: Joel Fernandes; Uladzislau Rezki; Josh Triplett; Steven Rostedt; Mathieu Desnoyers; Lai Jiangshan; rcu; LKML
Subject: Re: RCU: Question rcu_preempt_blocked_readers_cgp in rcu_gp_fqs_loop func

On Wed, Sep 09, 2020 at 07:03:39AM +0000, Zhang, Qiang wrote:
>
> With CONFIG_PREEMPT_RCU and a multi-level rcu_node tree, a task that is
> preempted inside an RCU read-side critical section is added to its leaf
> rcu_node's ->blkd_tasks list, and that leaf's ->gp_tasks pointer may be
> assigned.  Only leaf nodes in the RCU tree receive such tasks.
>
> But in the rcu_gp_fqs_loop() function, we check for blocked readers on
> the root node:
>
> static void rcu_gp_fqs_loop(void)
>  {
>             .....
>             struct rcu_node *rnp = rcu_get_root();
>             .....
>             if (!READ_ONCE(rnp->qsmask) &&
>                                !rcu_preempt_blocked_readers_cgp(rnp))    ------> rnp is root node
>                      break;
>             ....
> }
>
> Tasks are never added to the root node's ->blkd_tasks list, and its
> ->gp_tasks pointer is never assigned, so this check looks invalid.
> Should we instead check the leaf nodes, like this?

>There are two cases:

>1.      There is only a single rcu_node structure, which is both root
>        and leaf.  In this case, the current check is required:  Both
>        ->qsmask and the ->blkd_tasks list must be checked.  Your
>        rcu_preempt_blocked_readers() would work in this case, but
>        the current code is a bit faster because it does not need
>        to acquire the ->lock nor does it need the loop overhead.
>
>2.      There are multiple levels.  In this case, as you say, the root
>        rcu_node structure's ->blkd_tasks list will always be empty.
>        But also in this case, the root rcu_node structure's ->qsmask
>        cannot be zero until all the leaf rcu_node structures' ->qsmask
>        fields are zero and their ->blkd_tasks lists no longer have
>        tasks blocking the current grace period.  This means that your
>        rcu_preempt_blocked_readers() function would never return
>        true in this case.

>So the current code is fine.

>Are you seeing failures on mainline kernels?  If so, what is the failure
>mode?

Yes, that's right, thank you for your explanation.

Thanks,
Qiang

>                                                        Thanx, Paul

> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1846,6 +1846,25 @@ static bool rcu_gp_init(void)
>       return true;
>  }
>
> +static bool rcu_preempt_blocked_readers(void)
> +{
> +     struct rcu_node *rnp;
> +     unsigned long flags;
> +     bool ret = false;
> +
> +     rcu_for_each_leaf_node(rnp) {
> +             raw_spin_lock_irqsave_rcu_node(rnp, flags);
> +             if (rcu_preempt_blocked_readers_cgp(rnp)) {
> +                     ret = true;
> +                     raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
> +                     break;
> +             }
> +             raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
> +     }
> +
> +     return ret;
> +}
> +
>  /*
>   * Helper function for swait_event_idle_exclusive() wakeup at force-quiescent-state
>   * time.
> @@ -1864,7 +1883,7 @@ static bool rcu_gp_fqs_check_wake(int *gfp)
>               return true;
>
>       // The current grace period has completed.
> -     if (!READ_ONCE(rnp->qsmask) && !rcu_preempt_blocked_readers_cgp(rnp))
> +     if (!READ_ONCE(rnp->qsmask) && !rcu_preempt_blocked_readers())
>               return true;
>
>       return false;
> @@ -1927,7 +1946,7 @@ static void rcu_gp_fqs_loop(void)
>               /* Locking provides needed memory barriers. */
>               /* If grace period done, leave loop. */
>               if (!READ_ONCE(rnp->qsmask) &&
> -                 !rcu_preempt_blocked_readers_cgp(rnp))
> +                 !rcu_preempt_blocked_readers())
>                       break;
>               /* If time for quiescent-state forcing, do it. */
>               if (!time_after(rcu_state.jiffies_force_qs, jiffies) ||
> --
>
>
> thanks
> Qiang
