Message-ID: <20230913154907.GA26210@redhat.com>
Date: Wed, 13 Sep 2023 17:49:07 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Boqun Feng <boqun.feng@...il.com>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Rik van Riel <riel@...riel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Waiman Long <longman@...hat.com>, Will Deacon <will@...nel.org>
Cc: Alexey Gladkov <legion@...nel.org>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH 0/5] turn signal_struct.stats_lock into seqcount_rwlock_t
Hello,
RFC, not for inclusion yet. Please review, at least the intent.
In particular, could you look at the changelog of the last patch?
Am I right that thread_group_cputime() is currently not RT-friendly?
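
For reference, here is a simplified sketch of the current reader side of
thread_group_cputime() in kernel/sched/cputime.c (trimmed, so details may
differ from the actual code). The first pass is a lockless seqcount read;
if it races with a writer, need_seqretry() makes the retry pass take
stats_lock for reading:

/*
 * Sketch only; in-tree this lives in kernel/sched/cputime.c and relies on
 * <linux/sched/signal.h>, <linux/sched/cputime.h> and <linux/seqlock.h>.
 * read_sum_exec_runtime() is a local helper in that file.
 */
void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
{
	struct signal_struct *sig = tsk->signal;
	struct task_struct *t;
	unsigned int seq, nextseq;
	unsigned long flags;
	u64 utime, stime;

	rcu_read_lock();
	/* First round: lockless read of the seqcount. */
	nextseq = 0;
	do {
		seq = nextseq;
		flags = read_seqbegin_or_lock_irqsave(&sig->stats_lock, &seq);
		times->utime = sig->utime;
		times->stime = sig->stime;
		times->sum_exec_runtime = sig->sum_sched_runtime;

		for_each_thread(tsk, t) {
			task_cputime(t, &utime, &stime);
			times->utime += utime;
			times->stime += stime;
			times->sum_exec_runtime += read_sum_exec_runtime(t);
		}
		/* If the lockless pass raced with a writer, retry with the lock held. */
		nextseq = 1;
	} while (need_seqretry(&sig->stats_lock, seq));
	done_seqretry_irqrestore(&sig->stats_lock, &seq, flags);
	rcu_read_unlock();
}
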
During the (ongoing) s/while_each_thread/for_each_thread/ conversion
I noticed that some of these users can use signal->stats_lock instead
of lock_task_sighand(). And if we change them, we can try to avoid
taking stats_lock under siglock in wait_task_zombie(), at least.
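
For illustration only (read_stime() is a hypothetical helper, not part of
the series, and the field accesses are trimmed to the minimum), such a
conversion would look roughly like this:

/*
 * Hypothetical reader: instead of taking ->siglock via
 * lock_task_sighand(), a caller that only needs the accumulated
 * statistics can use the stats_lock read side. The caller must still
 * ensure the task can't go away, e.g. tsk == current or a pinned task.
 */
static u64 read_stime(struct task_struct *tsk)
{
	struct signal_struct *sig = tsk->signal;
	unsigned int seq;
	u64 stime;

	do {
		seq = read_seqbegin(&sig->stats_lock);
		stime = sig->stime;
	} while (read_seqretry(&sig->stats_lock, seq));

	return stime;
}
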
However, signal->stats_lock is a seqlock_t, but I think seqcount_rwlock_t
makes more sense. So let me try to turn it into seqcount_rwlock_t first.
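
For reference (this is not the actual patch, and the names below are made
up), the seqcount_LOCKNAME_t API that already exists in
include/linux/seqlock.h associates the seqcount with its write-side lock
roughly like this:

#include <linux/seqlock.h>
#include <linux/spinlock.h>

/* Illustration with made-up names, not the stats_lock conversion itself. */
static DEFINE_RWLOCK(stats_rwlock);
static seqcount_rwlock_t stats_seq =
	SEQCNT_RWLOCK_ZERO(stats_seq, &stats_rwlock);

static void stats_write(void)
{
	write_lock_irq(&stats_rwlock);		/* serialize writers */
	write_seqcount_begin(&stats_seq);	/* lockdep knows the rwlock is held */
	/* ... update the statistics ... */
	write_seqcount_end(&stats_seq);
	write_unlock_irq(&stats_rwlock);
}

static void stats_read(void)
{
	unsigned int seq;

	do {
		seq = read_seqcount_begin(&stats_seq);
		/* ... copy the statistics ... */
	} while (read_seqcount_retry(&stats_seq, seq));
}

The design note here is simply that associating the write-side lock with
the seqcount (rather than using a bare seqcount_t) tells lockdep, and
PREEMPT_RT, which lock serializes the writers.
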
OTOH... I am not sure I understand the value of signal->stats_lock.
I mean, do we have any numbers that prove seqlock_t is really
"better" than a plain rwlock_t?
So far only compile tested, and I need to re-read these changes with
a clear head. In any case, it is not that I myself like this series
very much, and most probably I did something wrong...
Oleg.
---
 include/linux/sched/signal.h |   4 +-
 include/linux/seqlock.h      | 105 +++++++++++++++++++++++++++++++++++--------
 kernel/exit.c                |  12 +++--
 kernel/fork.c                |   3 +-
 kernel/sched/cputime.c       |  10 +++--
 5 files changed, 106 insertions(+), 28 deletions(-)