Message-ID: <20250319190950.GF26879@redhat.com>
Date: Wed, 19 Mar 2025 20:09:50 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] exit: combine work under lock in
synchronize_group_exit() and coredump_task_exit()
On 03/19, Mateusz Guzik wrote:
>
> +	spin_lock_irq(&sighand->siglock);
> +	synchronize_group_exit(tsk, code);
> +	core_state = coredump_task_exit_prep(tsk);
> +	spin_unlock_irq(&sighand->siglock);
Well, but why do we need the new (and trivial) coredump_task_exit_prep?
Can't synchronize_group_exit() be
static struct core_state *synchronize_group_exit(struct task_struct *tsk, long code)
{
	struct sighand_struct *sighand = tsk->sighand;
	struct signal_struct *signal = tsk->signal;
	struct core_state *core_state = NULL;

	spin_lock_irq(&sighand->siglock);
	signal->quick_threads--;
	if ((signal->quick_threads == 0) &&
	    !(signal->flags & SIGNAL_GROUP_EXIT)) {
		signal->flags = SIGNAL_GROUP_EXIT;
		signal->group_exit_code = code;
		signal->group_stop_count = 0;
	}
	/*
	 * Serialize with any possible pending coredump.
	 * We must hold siglock around checking core_state
	 * and setting PF_POSTCOREDUMP. The core-inducing thread
	 * will increment ->nr_threads for each thread in the
	 * group without PF_POSTCOREDUMP set.
	 */
	tsk->flags |= PF_POSTCOREDUMP;
	core_state = tsk->signal->core_state;
	spin_unlock_irq(&sighand->siglock);

	return core_state;
}
?
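(For the caller side, a minimal sketch of what do_exit() would look like
under that variant; the two-argument coredump_task_exit() consuming the
returned core_state is only illustrative here, not taken from the patch:)

	void __noreturn do_exit(long code)
	{
		struct task_struct *tsk = current;
		struct core_state *core_state;
		...
		/* single siglock section: group-exit bookkeeping + PF_POSTCOREDUMP */
		core_state = synchronize_group_exit(tsk, code);
		...
		/* lockless: only wait for the dumper if a coredump is in flight */
		if (core_state)
			coredump_task_exit(tsk, core_state);
		...
	}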
No need to shift spin_lock_irq(siglock) from synchronize_group_exit() to do_exit(),
no need to rename coredump_task_exit...
Oleg.