Message-ID: <ZR0vjdp+BNiFm46+@lothringen>
Date:   Wed, 4 Oct 2023 11:25:33 +0200
From:   Frederic Weisbecker <frederic@...nel.org>
To:     "Paul E. McKenney" <paulmck@...nel.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Yong He <zhuangel570@...il.com>,
        Neeraj upadhyay <neeraj.iitr10@...il.com>,
        Joel Fernandes <joel@...lfernandes.org>,
        Boqun Feng <boqun.feng@...il.com>,
        Uladzislau Rezki <urezki@...il.com>, RCU <rcu@...r.kernel.org>
Subject: Re: [PATCH 0/5] srcu fixes

On Tue, Oct 03, 2023 at 05:35:31PM -0700, Paul E. McKenney wrote:
> On Wed, Oct 04, 2023 at 01:28:58AM +0200, Frederic Weisbecker wrote:
> > Hi,
> > 
> > This contains a fix for "SRCU: kworker hung in synchronize_srcu":
> > 
> > 	http://lore.kernel.org/CANZk6aR+CqZaqmMWrC2eRRPY12qAZnDZLwLnHZbNi=xXMB401g@mail.gmail.com
> > 
> > And a few cleanups.
> > 
> > Passed 50 hours of SRCU-P and SRCU-N.
> > 
> > git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
> > 	srcu/fixes
> > 
> > HEAD: 7ea5adc5673b42ef06e811dca75e43d558cc87e0
> > 
> > Thanks,
> > 	Frederic
> 
> Very good, and a big "Thank You!!!" to all of you!
> 
> I queued this series for testing purposes, and have started a bunch of
> SRCU-P and SRCU-N tests on one set of systems, and a single SRCU-P and
> SRCU-N on another system, but with both scenarios resized to 40 CPUs each.
> 
> While that is in flight, a few questions:
> 
> o	Please check the Co-developed-by rules.  Last I knew, it was
> 	necessary to have a Signed-off-by after each Co-developed-by.

Indeed! I'll try to collect the three Signed-off-by tags within a few days.
If some are missing, I'll use a Reported-by instead.
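
For reference, the ordering I have in mind is the one from
Documentation/process/submitting-patches.rst, i.e. each Co-developed-by
immediately followed by the matching Signed-off-by, with the submitter's
Signed-off-by last (names below purely illustrative):

	Co-developed-by: Co Developer <co.developer@example.com>
	Signed-off-by: Co Developer <co.developer@example.com>
	Signed-off-by: Submitter <submitter@example.com>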

> 
> o	Is it possible to get a Tested-by from the original reporter?
> 	Or is this not reproducible?

It seems that the issue only triggers rarely, but I hope we can get one.

> 
> o	Is it possible to convince rcutorture to find this sort of
> 	bug?  Seems like it should be, but easy to say...

So at least the part where advance/accelerate fails is observed from time
to time. But then two more rare events must happen:

1) The CPU failing to ACC/ADV must also fail to start the grace period because
  another CPU was faster.

2) The callbacks invocation must not run until that grace period has ended
  (even though a previous one completed with callbacks ready).

  Or it can run after all, but then at least the acceleration part of it has
  to happen after the end of the new grace period.

Perhaps all these conditions can be met more often if we overcommit the
number of vCPUs. For example, run 10 SRCU-P instances on 3 real CPUs. This
could introduce random breaks within the torture writers...
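
One (untested) way to ask for that with the rcutorture scripts, assuming I
remember the flags right and that kvm.sh actually overcommits instead of
serializing the batches:

	tools/testing/selftests/rcutorture/bin/kvm.sh \
		--cpus 3 --configs "10*SRCU-P" --duration 600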

Just an idea...

> 
> o	Frederic, would you like to include this in your upcoming
> 	pull request?  Or does it need more time?

At least the first patch, yes. It should be easy to backport and should be
enough to solve the race. I'll just wait a bit to collect more tags.

Thanks!

> 
> 						Thanx, Paul
> 
> > ---
> > 
> > Frederic Weisbecker (5):
> >       srcu: Fix callbacks acceleration mishandling
> >       srcu: Only accelerate on enqueue time
> >       srcu: Remove superfluous callbacks advancing from srcu_start_gp()
> >       srcu: No need to advance/accelerate if no callback enqueued
> >       srcu: Explain why callbacks invocations can't run concurrently
> > 
> > 
> >  kernel/rcu/srcutree.c | 55 ++++++++++++++++++++++++++++++++++++---------------
> >  1 file changed, 39 insertions(+), 16 deletions(-)
