Message-ID: <ZqquJwsH1vqsZhD2@LeoBras>
Date: Wed, 31 Jul 2024 18:35:35 -0300
From: Leonardo Bras <leobras@...hat.com>
To: neeraj.upadhyay@...nel.org
Cc: Leonardo Bras <leobras@...hat.com>,
linux-kernel@...r.kernel.org,
rcu@...r.kernel.org,
kernel-team@...a.com,
rostedt@...dmis.org,
mingo@...nel.org,
peterz@...radead.org,
paulmck@...nel.org,
imran.f.khan@...cle.com,
riel@...riel.com,
tglx@...utronix.de
Subject: Re: [PATCH v2 2/3] locking/csd_lock: Provide an indication of ongoing CSD-lock stall
On Mon, Jul 22, 2024 at 07:07:34PM +0530, neeraj.upadhyay@...nel.org wrote:
> From: "Paul E. McKenney" <paulmck@...nel.org>
>
> If a CSD-lock stall goes on long enough, it will cause an RCU CPU
> stall warning. This additional warning provides much additional
> console-log traffic and little additional information. Therefore,
> provide a new csd_lock_is_stuck() function that returns true if there
> is an ongoing CSD-lock stall. This function will be used by the RCU
> CPU stall warnings to provide a one-line indication of the stall when
> this function returns true.
I think it would be nice to also add the RCU usage in this patch, since
otherwise the function is declared here but has no user until a later patch.
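Just to make the suggestion concrete, here is a minimal sketch of the kind
of caller I mean, assuming the RCU stall path simply prints a one-line note
and bails out; the function name and message text below are made up, not
from this series:

/* Illustrative sketch only, not from this series. */
static void check_cpu_stall_example(void)
{
	/* csd_lock_is_stuck() is the helper added by this patch. */
	if (csd_lock_is_stuck()) {
		/* One-line note instead of a full RCU stall report. */
		pr_err("INFO: RCU stall coincides with ongoing CSD-lock stall\n");
		return;
	}
	/* ... the full RCU CPU stall report would be generated here ... */
}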
>
> [ neeraj.upadhyay: Apply Rik van Riel feedback. ]
>
> Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> Cc: Imran Khan <imran.f.khan@...cle.com>
> Cc: Ingo Molnar <mingo@...nel.org>
> Cc: Leonardo Bras <leobras@...hat.com>
> Cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>
> Cc: Rik van Riel <riel@...riel.com>
> Signed-off-by: Neeraj Upadhyay <neeraj.upadhyay@...nel.org>
> ---
> include/linux/smp.h | 6 ++++++
> kernel/smp.c | 16 ++++++++++++++++
> 2 files changed, 22 insertions(+)
>
> diff --git a/include/linux/smp.h b/include/linux/smp.h
> index fcd61dfe2af3..3871bd32018f 100644
> --- a/include/linux/smp.h
> +++ b/include/linux/smp.h
> @@ -294,4 +294,10 @@ int smpcfd_prepare_cpu(unsigned int cpu);
> int smpcfd_dead_cpu(unsigned int cpu);
> int smpcfd_dying_cpu(unsigned int cpu);
>
> +#ifdef CONFIG_CSD_LOCK_WAIT_DEBUG
> +bool csd_lock_is_stuck(void);
> +#else
> +static inline bool csd_lock_is_stuck(void) { return false; }
> +#endif
> +
> #endif /* __LINUX_SMP_H */
> diff --git a/kernel/smp.c b/kernel/smp.c
> index 81f7083a53e2..9385cc05de53 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -207,6 +207,19 @@ static int csd_lock_wait_getcpu(call_single_data_t *csd)
> return -1;
> }
>
> +static atomic_t n_csd_lock_stuck;
> +
> +/**
> + * csd_lock_is_stuck - Has a CSD-lock acquisition been stuck too long?
> + *
> + * Returns @true if a CSD-lock acquisition is stuck and has been stuck
> + * long enough for a "non-responsive CSD lock" message to be printed.
> + */
> +bool csd_lock_is_stuck(void)
> +{
> + return !!atomic_read(&n_csd_lock_stuck);
> +}
> +
> /*
> * Complain if too much time spent waiting. Note that only
> * the CSD_TYPE_SYNC/ASYNC types provide the destination CPU,
> @@ -228,6 +241,7 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
> cpu = csd_lock_wait_getcpu(csd);
> pr_alert("csd: CSD lock (#%d) got unstuck on CPU#%02d, CPU#%02d released the lock.\n",
> *bug_id, raw_smp_processor_id(), cpu);
> + atomic_dec(&n_csd_lock_stuck);
> return true;
> }
>
> @@ -251,6 +265,8 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
> pr_alert("csd: %s non-responsive CSD lock (#%d) on CPU#%d, waiting %lld ns for CPU#%02d %pS(%ps).\n",
> firsttime ? "Detected" : "Continued", *bug_id, raw_smp_processor_id(), (s64)ts_delta,
> cpu, csd->func, csd->info);
> + if (firsttime)
> + atomic_inc(&n_csd_lock_stuck);
> /*
> * If the CSD lock is still stuck after 5 minutes, it is unlikely
> * to become unstuck. Use a signed comparison to avoid triggering
> --
> 2.40.1
>
IIUC we have a single atomic counter for the whole system, which is
modified in csd_lock_wait_toolong() and read by the RCU stall warning code.
I think it should not be an issue regarding cache bouncing, because in the
worst-case scenario we would have two modifications per CPU for each
csd_lock_timeout period (which is 5 seconds by default).
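To make that access pattern concrete, here is a small user-space model of
the counter (C11 atomics standing in for the kernel's atomic_t; all names
below are mine, not from the patch): each waiter increments once when its
stall is first detected and decrements once when it gets unstuck, so the
reader side is only ever a plain atomic load.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* User-space stand-in for the kernel's n_csd_lock_stuck counter. */
static atomic_int n_stuck_model = 0;

/* Mirrors csd_lock_is_stuck(): a read-only load, no cache-line writes. */
static bool lock_is_stuck_model(void)
{
	return atomic_load(&n_stuck_model) != 0;
}

/* Called once when a waiter first crosses the stall timeout. */
static void stall_detected_model(void)
{
	atomic_fetch_add(&n_stuck_model, 1);
}

/* Called once when that waiter's lock is finally released. */
static void stall_resolved_model(void)
{
	atomic_fetch_sub(&n_stuck_model, 1);
}

int main(void)
{
	stall_detected_model();
	printf("stuck: %d\n", lock_is_stuck_model());	/* stuck: 1 */
	stall_resolved_model();
	printf("stuck: %d\n", lock_is_stuck_model());	/* stuck: 0 */
	return 0;
}

So the counter's cache line is written at most twice per waiting CPU per
timeout interval; the common case for everyone else is a read-only load.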
Thanks!
Leo