Date:   Wed, 20 Jul 2022 18:51:45 -0700
From:   Boqun Feng <boqun.feng@...il.com>
To:     "Paul E. McKenney" <paulmck@...nel.org>
Cc:     rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
        kernel-team@...com, rostedt@...dmis.org,
        Brian Foster <bfoster@...hat.com>,
        Dave Chinner <david@...morbit.com>,
        Al Viro <viro@...iv.linux.org.uk>, Ian Kent <raven@...maw.net>
Subject: Re: [PATCH rcu 04/12] rcu: Switch polled grace-period APIs to
 ->gp_seq_polled

On Wed, Jul 20, 2022 at 06:04:55PM -0700, Paul E. McKenney wrote:
[...]
> > > @@ -3860,7 +3944,7 @@ unsigned long get_state_synchronize_rcu(void)
> > >  	 * before the load from ->gp_seq.
> > >  	 */
> > >  	smp_mb();  /* ^^^ */
> > > -	return rcu_seq_snap(&rcu_state.gp_seq);
> > > +	return rcu_seq_snap(&rcu_state.gp_seq_polled);
> > 
> > I happened to run into this. There is one usage of
> > get_state_synchronize_rcu() in start_poll_synchronize_rcu(), in which
> > the return value of get_state_synchronize_rcu() ("gp_seq") will be used
> > for rcu_start_this_gp(). I don't think this is quite right, because
> > after this change, rcu_state.gp_seq and rcu_state.gp_seq_polled are
> > different values; in fact, ->gp_seq_polled is ahead of ->gp_seq by
> > however many times synchronize_rcu() has been called during early boot.
> > 
> > Am I missing something here?
> 
> It does not appear that you are missing anything, sad to say!
> 
> Does the following make it work better?
> 
> 							Thanx, Paul
> 
> ------------------------------------------------------------------------
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 2122359f0c862..cf2fd58a93a41 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -3571,7 +3571,7 @@ EXPORT_SYMBOL_GPL(get_state_synchronize_rcu);
>  unsigned long start_poll_synchronize_rcu(void)
>  {
>  	unsigned long flags;
> -	unsigned long gp_seq = get_state_synchronize_rcu();
> +	unsigned long gp_seq = rcu_seq_snap(&rcu_state.gp_seq);

get_state_synchronize_rcu() is still needed, because this function has to
return a cookie for polling. Something like below, maybe? Hope I didn't
mess up the ordering ;-)

Regards,
Boqun

---------------
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 84d281776688..0f9134871289 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3571,11 +3583,39 @@ EXPORT_SYMBOL_GPL(get_state_synchronize_rcu);
 unsigned long start_poll_synchronize_rcu(void)
 {
        unsigned long flags;
-       unsigned long gp_seq = get_state_synchronize_rcu();
+       unsigned long gp_seq_poll = get_state_synchronize_rcu();
+       unsigned long gp_seq;
        bool needwake;
        struct rcu_data *rdp;
        struct rcu_node *rnp;

+       /*
+        * Need to start a gp if no gp has been started yet.
+        *
+        * Note that we need to snapshot gp_seq after gp_seq_poll, otherwise
+        * consider the following case:
+        *
+        *      <no gp in progress>     // gp# is 0
+        *      snapshot gp_seq         // gp #2 will be marked as needed
+        *      <a gp passed>
+        *                              // gp# is 1
+        *      snapshot gp_seq_poll    // cookie not ready until gp #3
+        *
+        * then the following rcu_start_this_gp() won't mark gp #3 as needed,
+        * and polling won't become ready unless someone else starts a gp.
+        *
+        * And the following case is fine:
+        *
+        *      <no gp in progress>     // gp# is 0
+        *      snapshot gp_seq_poll    // cookie not ready until gp #2
+        *      <a gp passed>
+        *                              // gp# is 1
+        *      snapshot gp_seq         // gp #3 will be marked as needed
+        *
+        * Also note that we rely on the smp_mb() in get_state_synchronize_rcu()
+        * to order the two snapshots.
+        */
+       gp_seq = rcu_seq_snap(&rcu_state.gp_seq);
        lockdep_assert_irqs_enabled();
        local_irq_save(flags);
        rdp = this_cpu_ptr(&rcu_data);
@@ -3585,7 +3625,7 @@ unsigned long start_poll_synchronize_rcu(void)
        raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
        if (needwake)
                rcu_gp_kthread_wake();
-       return gp_seq;
+       return gp_seq_poll;
 }
 EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu);
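
(Side note for readers less familiar with the polled grace-period API: the
sketch below is illustrative only and not part of the patch; the names
my_deferred_free, my_retire, my_reap and pending_frees are made up, and
locking is omitted. It just shows how a caller typically consumes the
cookie that start_poll_synchronize_rcu() returns, by testing it later with
poll_state_synchronize_rcu().)

#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct my_deferred_free {
        struct list_head list;
        unsigned long rcu_cookie;       /* from start_poll_synchronize_rcu() */
        void *obj;
};

static LIST_HEAD(pending_frees);

/* Queue @obj; start a grace period if needed and record the cookie. */
static void my_retire(struct my_deferred_free *df, void *obj)
{
        df->obj = obj;
        df->rcu_cookie = start_poll_synchronize_rcu();
        list_add_tail(&df->list, &pending_frees);
}

/* Free everything whose recorded grace period has already elapsed. */
static void my_reap(void)
{
        struct my_deferred_free *df, *tmp;

        list_for_each_entry_safe(df, tmp, &pending_frees, list) {
                if (!poll_state_synchronize_rcu(df->rcu_cookie))
                        break;  /* oldest not ready; later ones aren't either */
                list_del(&df->list);
                kfree(df->obj);
                kfree(df);
        }
}

The point of the two-snapshot version above is that the cookie handed back
to such callers (gp_seq_poll) is only ever compared via
poll_state_synchronize_rcu(), while the separate gp_seq snapshot is what
rcu_start_this_gp() needs in order to request the right grace period.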
