Message-ID: <20250319192533.GG26879@redhat.com>
Date: Wed, 19 Mar 2025 20:25:33 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] exit: combine work under lock in
 synchronize_group_exit() and coredump_task_exit()

On 03/19, Oleg Nesterov wrote:
>
> On 03/19, Mateusz Guzik wrote:
> >
> > +	spin_lock_irq(&sighand->siglock);
> > +	synchronize_group_exit(tsk, code);
> > +	core_state = coredump_task_exit_prep(tsk);
> > +	spin_unlock_irq(&sighand->siglock);
>
> Well, but why do we need the new (and trivial) coredump_task_exit_prep?
>
> Can't synchronize_group_exit() be
>
> 	static struct core_state *synchronize_group_exit(struct task_struct *tsk, long code)
> 	{
> 		struct sighand_struct *sighand = tsk->sighand;
> 		struct signal_struct *signal = tsk->signal;
> 		struct core_state *core_state = NULL;
>
> 		spin_lock_irq(&sighand->siglock);
> 		signal->quick_threads--;
> 		if ((signal->quick_threads == 0) &&
> 		    !(signal->flags & SIGNAL_GROUP_EXIT)) {
> 			signal->flags = SIGNAL_GROUP_EXIT;
> 			signal->group_exit_code = code;
> 			signal->group_stop_count = 0;
> 		}
> 		/*
> 		 * Serialize with any possible pending coredump.
> 		 * We must hold siglock around checking core_state
> 		 * and setting PF_POSTCOREDUMP.  The core-inducing thread
> 		 * will increment ->nr_threads for each thread in the
> 		 * group without PF_POSTCOREDUMP set.
> 		 */
> 		tsk->flags |= PF_POSTCOREDUMP;
> 		core_state = tsk->signal->core_state;
> 		spin_unlock_irq(&sighand->siglock);
>
> 		return core_state;
> 	}
>
> ?
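
With this returning variant the do_exit() call site would presumably be
along these lines (a sketch only; the surrounding do_exit() code is
assumed here, not quoted from the v2 patch):

	struct core_state *core_state;

	core_state = synchronize_group_exit(tsk, code);
	if (core_state)
		coredump_task_exit(tsk, core_state);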

Or even better,


static void synchronize_group_exit(struct task_struct *tsk, long code)
{
	struct sighand_struct *sighand = tsk->sighand;
	struct signal_struct *signal = tsk->signal;
	struct core_state *core_state = NULL;

	spin_lock_irq(&sighand->siglock);
	signal->quick_threads--;
	if ((signal->quick_threads == 0) &&
	    !(signal->flags & SIGNAL_GROUP_EXIT)) {
		signal->flags = SIGNAL_GROUP_EXIT;
		signal->group_exit_code = code;
		signal->group_stop_count = 0;
	}
	/*
	 * Serialize with any possible pending coredump.
	 * We must hold siglock around checking core_state
	 * and setting PF_POSTCOREDUMP.  The core-inducing thread
	 * will increment ->nr_threads for each thread in the
	 * group without PF_POSTCOREDUMP set.
	 */
	tsk->flags |= PF_POSTCOREDUMP;
	core_state = tsk->signal->core_state;
	spin_unlock_irq(&sighand->siglock);

	if (core_state)
		coredump_task_exit(tsk, core_state);
}

> No need to shift spin_lock_irq(siglock) from synchronize_group_exit() to do_exit(),
> no need to rename coredump_task_exit...

do_exit() is already big enough...
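
With the variant above, the do_exit() side stays a single call, roughly
(a sketch; the exact surrounding context in do_exit() is assumed):

	/* any pending-coredump handling now happens inside */
	synchronize_group_exit(tsk, code);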

Oleg.

