Message-ID: <ZrFHQbmkcGc6DLad@LeoBras>
Date: Mon, 5 Aug 2024 18:42:25 -0300
From: Leonardo Bras <leobras@...hat.com>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Leonardo Bras <leobras@...hat.com>,
neeraj.upadhyay@...nel.org,
linux-kernel@...r.kernel.org,
rcu@...r.kernel.org,
kernel-team@...a.com,
rostedt@...dmis.org,
mingo@...nel.org,
peterz@...radead.org,
imran.f.khan@...cle.com,
riel@...riel.com,
tglx@...utronix.de
Subject: Re: [PATCH v2 2/3] locking/csd_lock: Provide an indication of ongoing CSD-lock stall
On Wed, Jul 31, 2024 at 03:08:29PM -0700, Paul E. McKenney wrote:
> On Wed, Jul 31, 2024 at 06:35:35PM -0300, Leonardo Bras wrote:
> > On Mon, Jul 22, 2024 at 07:07:34PM +0530, neeraj.upadhyay@...nel.org wrote:
> > > From: "Paul E. McKenney" <paulmck@...nel.org>
> > >
> > > If a CSD-lock stall goes on long enough, it will cause an RCU CPU
> > > stall warning. This additional warning provides much additional
> > > console-log traffic and little additional information. Therefore,
> > > provide a new csd_lock_is_stuck() function that returns true if there
> > > is an ongoing CSD-lock stall. This function will be used by the RCU
> > > CPU stall warnings to provide a one-line indication of the stall when
> > > this function returns true.
> >
> > I think it would be nice to also add the RCU usage here, since
> > otherwise the function is declared but never used.
>
Hi Paul,
> These are external functions, and the commit that uses it is just a few
> farther along in the stack.
Oh, I see. I may have received only part of this patchset.
I found it odd for a 3-patch series to contain a 4th patch, and it did not
occur to me that there could be more, so I did not check the mailing list
archives. :)
> Or do we now have some tool that complains
> if an external function is not used anywhere?
Not really. I was just interested in the patchset, and it made no sense to
me to add a function without using it. On top of that, it did not occur to
me that the caller was being included in a different patchset.
Thanks!
Leo
>
> > > [ neeraj.upadhyay: Apply Rik van Riel feedback. ]
> > >
> > > Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> > > Cc: Imran Khan <imran.f.khan@...cle.com>
> > > Cc: Ingo Molnar <mingo@...nel.org>
> > > Cc: Leonardo Bras <leobras@...hat.com>
> > > Cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>
> > > Cc: Rik van Riel <riel@...riel.com>
> > > Signed-off-by: Neeraj Upadhyay <neeraj.upadhyay@...nel.org>
> > > ---
> > > include/linux/smp.h | 6 ++++++
> > > kernel/smp.c | 16 ++++++++++++++++
> > > 2 files changed, 22 insertions(+)
> > >
> > > diff --git a/include/linux/smp.h b/include/linux/smp.h
> > > index fcd61dfe2af3..3871bd32018f 100644
> > > --- a/include/linux/smp.h
> > > +++ b/include/linux/smp.h
> > > @@ -294,4 +294,10 @@ int smpcfd_prepare_cpu(unsigned int cpu);
> > > int smpcfd_dead_cpu(unsigned int cpu);
> > > int smpcfd_dying_cpu(unsigned int cpu);
> > >
> > > +#ifdef CONFIG_CSD_LOCK_WAIT_DEBUG
> > > +bool csd_lock_is_stuck(void);
> > > +#else
> > > +static inline bool csd_lock_is_stuck(void) { return false; }
> > > +#endif
> > > +
> > > #endif /* __LINUX_SMP_H */
> > > diff --git a/kernel/smp.c b/kernel/smp.c
> > > index 81f7083a53e2..9385cc05de53 100644
> > > --- a/kernel/smp.c
> > > +++ b/kernel/smp.c
> > > @@ -207,6 +207,19 @@ static int csd_lock_wait_getcpu(call_single_data_t *csd)
> > > return -1;
> > > }
> > >
> > > +static atomic_t n_csd_lock_stuck;
> > > +
> > > +/**
> > > + * csd_lock_is_stuck - Has a CSD-lock acquisition been stuck too long?
> > > + *
> > > + * Returns @true if a CSD-lock acquisition is stuck and has been stuck
> > > + * long enough for a "non-responsive CSD lock" message to be printed.
> > > + */
> > > +bool csd_lock_is_stuck(void)
> > > +{
> > > + return !!atomic_read(&n_csd_lock_stuck);
> > > +}
> > > +
> > > /*
> > > * Complain if too much time spent waiting. Note that only
> > > * the CSD_TYPE_SYNC/ASYNC types provide the destination CPU,
> > > @@ -228,6 +241,7 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
> > > cpu = csd_lock_wait_getcpu(csd);
> > > pr_alert("csd: CSD lock (#%d) got unstuck on CPU#%02d, CPU#%02d released the lock.\n",
> > > *bug_id, raw_smp_processor_id(), cpu);
> > > + atomic_dec(&n_csd_lock_stuck);
> > > return true;
> > > }
> > >
> > > @@ -251,6 +265,8 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
> > > pr_alert("csd: %s non-responsive CSD lock (#%d) on CPU#%d, waiting %lld ns for CPU#%02d %pS(%ps).\n",
> > > firsttime ? "Detected" : "Continued", *bug_id, raw_smp_processor_id(), (s64)ts_delta,
> > > cpu, csd->func, csd->info);
> > > + if (firsttime)
> > > + atomic_inc(&n_csd_lock_stuck);
> > > /*
> > > * If the CSD lock is still stuck after 5 minutes, it is unlikely
> > > * to become unstuck. Use a signed comparison to avoid triggering
> > > --
> > > 2.40.1
> > >
> >
> > IIUC we have a single atomic counter for the whole system, which is
> > modified in csd_lock_wait_toolong() and read by the RCU stall warning.
> >
> > I think cache bouncing should not be an issue, because in the worst
> > case we would have two modifications per CPU per csd_lock_timeout
> > (5 seconds by default).
>
> If it does become a problem, there are ways of taking care of it.
> Just a little added complexity. ;-)
>
> > Thanks!
>
> And thank you for looking this over!
>
> Thanx, Paul
>