Message-ID: <87zfrhfu9j.fsf_-_@email.froward.int.ebiederm.org>
Date: Tue, 18 Jun 2024 23:09:44 -0500
From: "Eric W. Biederman" <ebiederm@...ssion.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Tejun Heo <tj@...nel.org>,
linux-kernel@...r.kernel.org
Subject: [PATCH 10/17] signal: Only set JOBCTL_WILL_EXIT if it is not
already set

The various code paths that set JOBCTL_WILL_EXIT optimize away setting
JOBCTL_WILL_EXIT and calling signal_wake_up, each based on a different
condition.  If the task has already committed itself to exiting by
setting JOBCTL_WILL_EXIT, setting it again accomplishes nothing.  So
instead of using any of the original conditions, only set
JOBCTL_WILL_EXIT when JOBCTL_WILL_EXIT is not already set.

Additionally, skip task_clear_jobctl_pending once JOBCTL_WILL_EXIT has
been set, as task_set_jobctl_pending won't set any pending bits after
that.
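
Purely for illustration (not part of this patch), the pattern every
call site converges on could be captured in a single helper.  A minimal
sketch, assuming the caller holds siglock; the name task_set_will_exit
is hypothetical:

	/*
	 * Hypothetical helper sketching the pattern repeated at each
	 * call site below: commit a task to exiting at most once,
	 * clearing any pending job control state first, then kick the
	 * task so it notices.  Assumes t->sighand->siglock is held.
	 */
	static void task_set_will_exit(struct task_struct *t)
	{
		if (t->jobctl & JOBCTL_WILL_EXIT)
			return;	/* already committed to exiting */

		task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
		t->jobctl |= JOBCTL_WILL_EXIT;
		signal_wake_up(t, 1);
	}
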
Signed-off-by: "Eric W. Biederman" <ebiederm@...ssion.com>
---
 fs/coredump.c   |  4 ++--
 kernel/exit.c   |  5 ++++-
 kernel/signal.c | 21 +++++++++++++--------
 3 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/fs/coredump.c b/fs/coredump.c
index f3e363fa09a3..bcef41ec69a9 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -371,8 +371,8 @@ static int zap_process(struct task_struct *start, int exit_code)
 	start->signal->group_stop_count = 0;
 
 	for_each_thread(start, t) {
-		task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
-		if (!(t->flags & PF_POSTCOREDUMP)) {
+		if (!(t->jobctl & JOBCTL_WILL_EXIT)) {
+			task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
 			t->jobctl |= JOBCTL_WILL_EXIT;
 			signal_wake_up(t, 1);
 		}
diff --git a/kernel/exit.c b/kernel/exit.c
index 0059c60946a3..73eb3afbf083 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -801,7 +801,10 @@ static void synchronize_group_exit(struct task_struct *tsk, long code)
 	struct signal_struct *signal = tsk->signal;
 
 	spin_lock_irq(&sighand->siglock);
-	tsk->jobctl |= JOBCTL_WILL_EXIT;
+	if (!(tsk->jobctl & JOBCTL_WILL_EXIT)) {
+		task_clear_jobctl_pending(tsk, JOBCTL_PENDING_MASK);
+		tsk->jobctl |= JOBCTL_WILL_EXIT;
+	}
 	signal->quick_threads--;
 	if ((signal->quick_threads == 0) &&
 	    !(signal->flags & SIGNAL_GROUP_EXIT)) {
diff --git a/kernel/signal.c b/kernel/signal.c
index 12e552a35848..341717c6cc97 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -911,8 +911,10 @@ static bool prepare_signal(int sig, struct task_struct *p, bool force)
 		if (signal->core_state && (sig == SIGKILL)) {
 			struct task_struct *dumper =
 				signal->core_state->dumper.task;
-			dumper->jobctl |= JOBCTL_WILL_EXIT;
-			signal_wake_up(dumper, 1);
+			if (!(dumper->jobctl & JOBCTL_WILL_EXIT)) {
+				dumper->jobctl |= JOBCTL_WILL_EXIT;
+				signal_wake_up(dumper, 1);
+			}
 		}
 		/*
 		 * The process is in the middle of dying, drop the signal.
@@ -1054,9 +1056,11 @@ static void complete_signal(int sig, struct task_struct *p, enum pid_type type)
 			signal->group_exit_code = sig;
 			signal->group_stop_count = 0;
 			__for_each_thread(signal, t) {
-				task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
-				t->jobctl |= JOBCTL_WILL_EXIT;
-				signal_wake_up(t, 1);
+				if (!(t->jobctl & JOBCTL_WILL_EXIT)) {
+					task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
+					t->jobctl |= JOBCTL_WILL_EXIT;
+					signal_wake_up(t, 1);
+				}
 			}
 			return;
 		}
@@ -1378,12 +1382,13 @@ int zap_other_threads(struct task_struct *p)
 	p->signal->group_stop_count = 0;
 
 	for_other_threads(p, t) {
-		task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
 		count++;
 
-		/* Don't bother with already dead threads */
-		if (t->exit_state)
+		/* Only bother with threads that might be alive */
+		if (t->jobctl & JOBCTL_WILL_EXIT)
 			continue;
+
+		task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
 		t->jobctl |= JOBCTL_WILL_EXIT;
 		signal_wake_up(t, 1);
 	}
--
2.41.0