Message-ID: <20250804122758.620f934a@gandalf.local.home>
Date: Mon, 4 Aug 2025 12:27:58 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Yunseong Kim <ysk@...lloc.com>
Cc: Dmitry Vyukov <dvyukov@...gle.com>,
 Andrey Konovalov <andreyknvl@...il.com>,
 Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
 Thomas Gleixner <tglx@...utronix.de>,
 Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
 Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
 Byungchul Park <byungchul@...com>,
 max.byungchul.park@...il.com,
 Yeoreum Yun <yeoreum.yun@....com>,
 ppbuk5246@...il.com,
 linux-usb@...r.kernel.org,
 linux-rt-devel@...ts.linux.dev,
 syzkaller@...glegroups.com,
 linux-kernel@...r.kernel.org,
 stable@...r.kernel.org
Subject: Re: [PATCH 1/4] kcov: Use raw_spinlock_t for kcov->lock and kcov_remote_lock
On Sun, 3 Aug 2025 07:20:43 +0000
Yunseong Kim <ysk@...lloc.com> wrote:
> The locks kcov->lock and kcov_remote_lock can be acquired from
> atomic contexts, such as instrumentation hooks invoked from interrupt
> handlers.
>
> On PREEMPT_RT-enabled kernels, spinlock_t is typically implemented
On PREEMPT_RT it is implemented as a sleeping lock. You don't need to say
"typically".
> as a sleeping lock (e.g., mapped to an rt_mutex). Acquiring such a
> lock in atomic context, where sleeping is not allowed, can lead to
> system hangs or crashes.
>
> To avoid this, convert both locks to raw_spinlock_t, which always
> provides non-sleeping spinlock semantics regardless of preemption model.
>
> Signed-off-by: Yunseong Kim <ysk@...lloc.com>
> ---
> kernel/kcov.c | 58 +++++++++++++++++++++++++--------------------------
> 1 file changed, 29 insertions(+), 29 deletions(-)
>
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index 187ba1b80bda..7d9b53385d81 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -54,7 +54,7 @@ struct kcov {
> */
> refcount_t refcount;
> /* The lock protects mode, size, area and t. */
> - spinlock_t lock;
> + raw_spinlock_t lock;
> enum kcov_mode mode;
> /* Size of arena (in long's). */
> unsigned int size;
> @@ -84,7 +84,7 @@ struct kcov_remote {
> struct hlist_node hnode;
> };
>
> -static DEFINE_SPINLOCK(kcov_remote_lock);
> +static DEFINE_RAW_SPINLOCK(kcov_remote_lock);
> static DEFINE_HASHTABLE(kcov_remote_map, 4);
> static struct list_head kcov_remote_areas = LIST_HEAD_INIT(kcov_remote_areas);
>
> @@ -406,7 +406,7 @@ static void kcov_remote_reset(struct kcov *kcov)
> struct hlist_node *tmp;
> unsigned long flags;
>
> - spin_lock_irqsave(&kcov_remote_lock, flags);
> + raw_spin_lock_irqsave(&kcov_remote_lock, flags);
Not related to these patches, but have you thought about converting some of
these locks over to the "guard()" infrastructure provided by cleanup.h?
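For illustration only (untested, and trimmed to just the hunk quoted above),
a guard()-based form of this function could look roughly like the sketch
below:

	/*
	 * Sketch only: guard() comes from <linux/cleanup.h> and a
	 * raw_spinlock_irqsave guard class is defined in <linux/spinlock.h>.
	 * The guard takes the lock with IRQs saved and drops it
	 * automatically when the enclosing scope is left, so the flags
	 * variable and the explicit unlock on every exit path go away.
	 */
	static void kcov_remote_reset(struct kcov *kcov)
	{
		int bkt;
		struct kcov_remote *remote;
		struct hlist_node *tmp;

		guard(raw_spinlock_irqsave)(&kcov_remote_lock);

		hash_for_each_safe(kcov_remote_map, bkt, tmp, remote, hnode) {
			if (remote->kcov != kcov)
				continue;
			/* rest of the loop and function body unchanged */
		}
	}
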
> hash_for_each_safe(kcov_remote_map, bkt, tmp, remote, hnode) {
> if (remote->kcov != kcov)
> continue;
Reviewed-by: Steven Rostedt (Google) <rostedt@...dmis.org>
-- Steve