Message-ID: <877celinkf.fsf_-_@email.froward.int.ebiederm.org>
Date: Tue, 18 Jun 2024 23:06:08 -0500
From: "Eric W. Biederman" <ebiederm@...ssion.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Tejun Heo <tj@...nel.org>,
linux-kernel@...r.kernel.org
Subject: [PATCH 03/17] coredump: Consolidate the work to allow SIGKILL
during coredumps

Consolidate all of the work to allow SIGKILL during coredumps in
zap_threads. Move the comment explaining what is happening from
zap_process. Clear the per-task pending SIGKILL to ensure that
__fatal_signal_pending returns false and that interruptible waits
continue to wait during coredump generation. Move the atomic_set
before the comment, as setting nr_threads has nothing to do with
allowing SIGKILL.

With the work of allowing SIGKILL consolidated in zap_threads, make
the process tear-down in zap_process as much like the other places
that set SIGKILL as possible.

Include current in the set of processes being asked to exit. With the
per-task SIGKILL cleared in zap_threads, the current process remains
killable as it performs the coredump, which removes the only reason I
know of for current not to exit.

Separately count the tasks that will stop in coredump_task_exit and
that coredump_wait needs to wait for. Which tasks to count is
different from which tasks to signal, and that logic needs to remain
even when task exiting is unified.

Signed-off-by: "Eric W. Biederman" <ebiederm@...ssion.com>
---
fs/coredump.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/fs/coredump.c b/fs/coredump.c
index a57a06b80f57..be0405346882 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -366,18 +366,17 @@ static int zap_process(struct task_struct *start, int exit_code)
struct task_struct *t;
int nr = 0;
- /* Allow SIGKILL, see prepare_signal() */
start->signal->flags = SIGNAL_GROUP_EXIT;
start->signal->group_exit_code = exit_code;
start->signal->group_stop_count = 0;
for_each_thread(start, t) {
task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
- if (t != current && !(t->flags & PF_POSTCOREDUMP)) {
+ if (!(t->flags & PF_POSTCOREDUMP)) {
sigaddset(&t->pending.signal, SIGKILL);
signal_wake_up(t, 1);
- nr++;
}
+ nr += (t != current) && !(t->flags & PF_POSTCOREDUMP);
}
return nr;
@@ -393,9 +392,12 @@ static int zap_threads(struct task_struct *tsk,
if (!(signal->flags & SIGNAL_GROUP_EXIT) && !signal->group_exec_task) {
signal->core_state = core_state;
nr = zap_process(tsk, exit_code);
+ atomic_set(&core_state->nr_threads, nr);
+
+ /* Allow SIGKILL, see prepare_signal() */
clear_tsk_thread_flag(tsk, TIF_SIGPENDING);
+ sigdelset(&tsk->pending.signal, SIGKILL);
tsk->flags |= PF_DUMPCORE;
- atomic_set(&core_state->nr_threads, nr);
}
spin_unlock_irq(&tsk->sighand->siglock);
return nr;
--
2.41.0