Message-ID: <20251113073420.yko6jYcI@linutronix.de>
Date: Thu, 13 Nov 2025 08:34:20 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Yongliang Gao <leonylgao@...il.com>
Cc: rostedt@...dmis.org, mhiramat@...nel.org,
mathieu.desnoyers@...icios.com, linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org, frankjpliu@...cent.com,
Yongliang Gao <leonylgao@...cent.com>,
Huang Cun <cunhuang@...cent.com>
Subject: Re: [PATCH v3] trace/pid_list: optimize pid_list->lock contention
On 2025-11-13 08:02:52 [+0800], Yongliang Gao wrote:
> --- a/kernel/trace/pid_list.c
> +++ b/kernel/trace/pid_list.c
> @@ -138,14 +139,16 @@ bool trace_pid_list_is_set(struct trace_pid_list *pid_list, unsigned int pid)
> if (pid_split(pid, &upper1, &upper2, &lower) < 0)
> return false;
>
> - raw_spin_lock_irqsave(&pid_list->lock, flags);
> - upper_chunk = pid_list->upper[upper1];
> - if (upper_chunk) {
> - lower_chunk = upper_chunk->data[upper2];
> - if (lower_chunk)
> - ret = test_bit(lower, lower_chunk->data);
> - }
> - raw_spin_unlock_irqrestore(&pid_list->lock, flags);
> + do {
> + seq = read_seqcount_begin(&pid_list->seqcount);
> + ret = false;
> + upper_chunk = pid_list->upper[upper1];
> + if (upper_chunk) {
> + lower_chunk = upper_chunk->data[upper2];
> + if (lower_chunk)
> + ret = test_bit(lower, lower_chunk->data);
> + }
> + } while (read_seqcount_retry(&pid_list->seqcount, seq));
How is this better? Any numbers?
If the write side is busy and the lock is handed over from one CPU to
another, then it is possible that the reader spins here and does several
loops, right?
And in that case, how accurate would it be? I mean the result could
change right after the sequence here completes because the write side
got active again. How bad would it be if there were no locking at all and
RCU ensured that the chunks (and data) don't disappear while looking at
them?
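Something along these lines is what I have in mind (an untested sketch
only; it assumes the free path is converted to defer freeing the chunks
until after a grace period, e.g. via kfree_rcu() or synchronize_rcu(),
which the current code does not do):

bool trace_pid_list_is_set(struct trace_pid_list *pid_list, unsigned int pid)
{
	union upper_chunk *upper_chunk;
	union lower_chunk *lower_chunk;
	unsigned int upper1, upper2, lower;
	bool ret = false;

	if (!pid_list)
		return false;

	if (pid_split(pid, &upper1, &upper2, &lower) < 0)
		return false;

	/* No lock and no retry loop: RCU only keeps the chunks alive. */
	rcu_read_lock();
	/*
	 * READ_ONCE() instead of rcu_dereference() only because the
	 * pointers are not __rcu annotated here; a proper conversion
	 * would annotate them and use rcu_dereference().
	 */
	upper_chunk = READ_ONCE(pid_list->upper[upper1]);
	if (upper_chunk) {
		lower_chunk = READ_ONCE(upper_chunk->data[upper2]);
		if (lower_chunk)
			ret = test_bit(lower, lower_chunk->data);
	}
	rcu_read_unlock();

	return ret;
}

The reader then never spins, and the result is not any less accurate
than with the seqcount: the bit can change right after the check in
either variant.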
> return ret;
> }
Sebastian