Message-ID: <75b1d69e-b93f-43ba-8289-9465b9fa39a8@paulmck-laptop>
Date: Mon, 28 Aug 2023 04:54:59 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Huacai Chen <chenhuacai@...nel.org>
Cc: Joel Fernandes <joel@...lfernandes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Z qiang <qiang.zhang1211@...il.com>,
Huacai Chen <chenhuacai@...ngson.cn>,
Frederic Weisbecker <frederic@...nel.org>,
Neeraj Upadhyay <quic_neeraju@...cinc.com>,
Josh Triplett <josh@...htriplett.org>,
Boqun Feng <boqun.feng@...il.com>,
Ingo Molnar <mingo@...nel.org>,
John Stultz <jstultz@...gle.com>,
Stephen Boyd <sboyd@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Sergey Senozhatsky <senozhatsky@...omium.org>,
rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org, Binbin Zhou <zhoubinbin@...ngson.cn>
Subject: Re: [PATCH V4 2/2] rcu: Update jiffies in rcu_cpu_stall_reset()
On Mon, Aug 28, 2023 at 07:30:43PM +0800, Huacai Chen wrote:
> Hi, Paul and Joel,
>
> On Mon, Aug 28, 2023 at 6:47 PM Paul E. McKenney <paulmck@...nel.org> wrote:
> >
> > On Sun, Aug 27, 2023 at 06:11:40PM -0400, Joel Fernandes wrote:
> > > On Sun, Aug 27, 2023 at 1:51 AM Huacai Chen <chenhuacai@...nel.org> wrote:
> > > [..]
> > > > > > > > The only way I know of to avoid these sorts of false positives is for
> > > > > > > > the user to manually suppress all timeouts (perhaps using a kernel-boot
> > > > > > > > parameter for your early-boot case), do the gdb work, and then unsuppress
> > > > > > > > all stalls. Even that won't work for networking, because the other
> > > > > > > > system's clock will be running throughout.
> > > > > > > >
> > > > > > > > In other words, from what I know now, there is no perfect solution.
> > > > > > > > Therefore, there are sharp limits to the complexity of any solution that
> > > > > > > > I will be willing to accept.
> > > > > > > I think the simplest solution is (I hope Joel will not be angry):
> > > > > >
> > > > > > Not angry at all, just want to help. ;-). The problem is the 300*HZ solution
> > > > > > will also affect the VM workloads, which also do a similar reset. Allow me a few
> > > > > > days to see if I can take a shot at fixing it slightly differently. I am
> > > > > > trying Paul's idea of setting jiffies at a later time. I think it is doable.
> > > > > > I think the advantage of doing this is it will make stall detection more
> > > > > > robust in the face of these gaps in jiffies updates. And that solution does
> > > > > > not even need us to rely on ktime (and all the issues that come with that).
> > > > > >
> > > > >
> > > > > I wrote a patch similar to Paul's idea and sent it out for review, the
> > > > > advantage being that it is based purely on jiffies. Could you try it out
> > > > > and let me know?
> > > > If you can cc my gmail <chenhuacai@...il.com>, that would be better.
> > >
> > > Sure, will do.
> > >
> > > > I have read your patch; maybe the counter (nr_fqs_jiffies_stall)
> > > > should be atomic_t, and we should use an atomic operation to decrement
> > > > its value, because rcu_gp_fqs() can be run concurrently and we may
> > > > miss the (nr_fqs == 1) condition.
> > >
> > > I don't think so. There is only one place where the RMW operation
> > > happens, and rcu_gp_fqs() is called only from the GP kthread. So a
> > > concurrent RMW (and hence a lost update) is not possible.
> >
> > Huacai, is your concern that the gdb user might have created a script
> > (for example, printing a variable or two, then automatically continuing),
> > so that breakpoints could happen in quick succession, such that the
> > second breakpoint might run concurrently with rcu_gp_fqs()?
> >
> > If this can really happen, the point that Joel makes is a good one, namely
> > that rcu_gp_fqs() is single-threaded and (absent rcutorture) runs only
> > once every few jiffies. And gdb breakpoints, even with scripting, should
> > also be rather rare. So if this is an issue, a global lock should do the
> > trick, perhaps even one of the existing locks in the rcu_state structure.
> > The result should then be just as performant/scalable and a lot simpler
> > than use of atomics.
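For concreteness, the single-threaded countdown under discussion might look
something like the following sketch (field names follow Joel's patch; the
exact placement within rcu_gp_fqs() is an assumption on my part, not the
final code):

	static void rcu_gp_fqs(bool first_time)
	{
		int nr_fqs = READ_ONCE(rcu_state.nr_fqs_jiffies_stall);

		/*
		 * Only the RCU grace-period kthread invokes this function,
		 * so a plain (non-atomic) read-modify-write suffices.
		 */
		if (nr_fqs) {
			if (nr_fqs == 1)
				/* Final deferral: re-arm the stall timeout. */
				WRITE_ONCE(rcu_state.jiffies_stall,
					   jiffies + rcu_jiffies_till_stall_check());
			WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, --nr_fqs);
		}

		/* ... existing force-quiescent-state processing ... */
	}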
>
> Sorry, I made a mistake. Yes, there is no concurrent issue, and this
> approach probably works. But I have another problem: how to ensure
> that there is a jiffies update in three calls to rcu_gp_fqs()? Or in
> other words, is three also a magic number here?
Each of the three calls to rcu_gp_fqs() involves a wakeup of and a
context switch to RCU's grace-period kthread, each of which should be
sufficient to update jiffies if initially in an out-of-date-jiffies state.
The three is to some extent magic, the idea being to avoid a situation
where a currently running FQS reenables stall warnings immediately
after gdb disables them.
Obviously, if your testing shows that some other value works better,
please do let us know so that we can update! But we have to start
somewhere.
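To make the "three" concrete, the reset side would then be something like
this (again just a sketch, under the same naming assumptions):

	void rcu_cpu_stall_reset(void)
	{
		/*
		 * Wait for three FQS invocations before re-arming the
		 * stall timeout, each of which involves a wakeup of the
		 * GP kthread and thus a chance for jiffies to catch up.
		 */
		WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, 3);
		WRITE_ONCE(rcu_state.jiffies_stall, ULONG_MAX);
	}

Until the countdown completes, jiffies_stall stays far in the future, so a
stale jiffies value cannot trigger a false-positive stall splat.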
> And I rechecked the commit message of a80be428fbc1f1f3bc9e ("rcu: Do
> not disable GP stall detection in rcu_cpu_stall_reset()"). I don't
> know why Sergey said that the original code disables stall-detection
> forever; in fact, it only disables detection for the current GP.
Well, it does disable stall detection forever in the case where the
current grace period lasts forever, which if I recall correctly was the
case that Sergey was encountering.
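For reference, before that commit the reset looked roughly like this (from
memory, so treat it as an approximation rather than the exact code):

	void rcu_cpu_stall_reset(void)
	{
		/* Push the timeout effectively to the end of time. */
		WRITE_ONCE(rcu_state.jiffies_stall, jiffies + ULONG_MAX / 2);
	}

Because jiffies_stall is re-armed only when a new grace period starts, a
grace period that never ends leaves stall detection disabled forever.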
Thanx, Paul
> Huacai
>
> >
> > > Could you test the patch for the issue you are seeing and provide your
> > > Tested-by tag? Thanks,
> >
> > Either way, testing would of course be very good! ;-)
> >
> > Thanx, Paul