Message-ID: <1336763121.12610.13.camel@sbsiddha-desk.sc.intel.com>
Date:	Fri, 11 May 2012 12:05:21 -0700
From:	Suresh Siddha <suresh.b.siddha@...el.com>
To:	Oleg Nesterov <oleg@...hat.com>
Cc:	torvalds@...ux-foundation.org, hpa@...or.com, mingo@...e.hu,
	linux-kernel@...r.kernel.org, suresh@...stanetworks.com
Subject: Re: [PATCH v2 2/4] coredump: ensure the fpu state is flushed for
 proper multi-threaded core dump

On Fri, 2012-05-11 at 18:51 +0200, Oleg Nesterov wrote:
> On 05/10, Suresh Siddha wrote:
> >
> > --- a/fs/exec.c
> > +++ b/fs/exec.c
> > @@ -1930,8 +1930,21 @@ static int coredump_wait(int exit_code, struct core_state *core_state)
> >  		core_waiters = zap_threads(tsk, mm, core_state, exit_code);
> >  	up_write(&mm->mmap_sem);
> >
> > -	if (core_waiters > 0)
> > +	if (core_waiters > 0) {
> > +		struct core_thread *ptr;
> > +
> >  		wait_for_completion(&core_state->startup);
> > +		/*
> > +		 * Wait for all the threads to become inactive, so that
> > +		 * all the thread context (extended register state, like
> > +		 * fpu etc) gets copied to the memory.
> > +		 */
> > +		ptr = core_state->dumper.next;
> > +		while (ptr != NULL) {
> > +			wait_task_inactive(ptr->task, 0);
> > +			ptr = ptr->next;
> > +		}
> > +	}
> 
> OK, but this adds the unnecessary penalty if we are not going to dump
> the core.

If we are not planning to dump the core, then we will not be in
coredump_wait() in the first place, right?

coredump_wait() already waits for all the threads to respond (see the
existing wait_for_completion() line just before the proposed addition).
wait_for_completion() already ensures that the other threads are close
to calling schedule() in TASK_UNINTERRUPTIBLE, so most of the penalty is
already paid, and in most cases wait_task_inactive() will return success
immediately. In the corner cases (where we used to hit the BUG_ON) we
will now spin a bit while the other thread is still on the rq.

> Perhaps it makes sense to create a separate helper and call it from
> do_coredump() right before "retval = binfmt->core_dump(&cprm)" ?

I didn't want to spread the core dump waits at multiple places.
coredump_wait() seems to be the natural place, as we are already waiting
for other threads to join.

thanks,
suresh

