Date:	Thu, 2 Jul 2009 17:37:36 +0200
From:	Oleg Nesterov <oleg@...hat.com>
To:	Neil Horman <nhorman@...driver.com>
Cc:	linux-kernel@...r.kernel.org, alan@...rguk.ukuu.org.uk,
	andi@...stfloor.org, akpm@...ux-foundation.org,
	earl_chew@...lent.com, Roland McGrath <roland@...hat.com>
Subject: Re: [PATCH 3/3] exec: Allow do_coredump to wait for user space
	pipe readers to complete (v6)

On 07/02, Neil Horman wrote:
>
> On Thu, Jul 02, 2009 at 01:36:10PM +0200, Oleg Nesterov wrote:
> > On 07/02, Neil Horman wrote:
> > >
> > > > >  void do_coredump(long signr, int exit_code, struct pt_regs *regs)
> > > > >  {
> > > > >  	struct core_state core_state;
> > > > > @@ -1862,6 +1886,8 @@ void do_coredump(long signr, int exit_code, struct pt_regs *regs)
> > > > >  		current->signal->group_exit_code |= 0x80;
> > > > >
> > > > >  close_fail:
> > > > > +	if (ispipe && core_pipe_limit)
> > > > > +		wait_for_dump_helpers(file);
> > > >
> > > > Oh. I thought I misread the first version, but now I see I got it right.
> > > > And now I am confused again.
> > > >
> > > > So, we only wait if core_pipe_limit != 0. Why?
> > > >
> > > > The previous version, v4, called wait_for_dump_helpers() unconditionally.
> > > > And this looks more right to me. Once again, even without wait_for_dump()
> > > > the coredumping process can't be reaped until core_pattern app reads all
> > > > data from the pipe.
> > > >
> > > > I won't insist. However, anybody else please take a look?
> > > >
> > > > core_pipe_limit != 0 limits the number of coredump-via-pipe in flight, OK.
> > > >
> > > > But, should wait_for_dump_helpers() depend on core_pipe_limit != 0?
> > > >
> > > I messed this up in v4 and am fixing it here.  If you read the documentation I
> > > added in patch 2, you can see that my intent with the core_pipe_limit sysctl was
> > > to designate 0 as a special value allowing unlimited parallel core_dumps in
> > > which we do not wait for any user space process completion.
> >
> > We do wait in any case. If core_dump app doesn't read the data from the pipe
> > ->core_dump() can't complete. OK, unless all data fits into pipe buffers.
> >
> That's true, but consider the converse situation, in which the userspace app does
> read the pipe, so that we return from ->core_dump().  If the user app then
> queries the /proc/<pid> directory of the crashing process, we are open to a race.
> That's what this wait helps with.

Sure. And I do agree wait_for_dump_helpers() is useful.

And I do agree with core_pipe_limit.

> > > (so that current
> > > system behavior can be maintained, which I think is desirable for those user
> > > space helpers that don't need access to a crashing process's metadata via proc).
> > > If you look above in the second patch where we do an atomic_inc_return, you'll
> > > see that we only honor the core_pipe_limit value if it's non-zero.  This additional
> > > check restores the behavior I documented in that patch.
> >
> > If you look at my message, you will see I am not arguing, but I am asking
> > others to ack this behaviour.
> >
> Ok, but you asked the question as to why I added that check, this is the answer.

And I still can't understand your answer.

My question is: why don't we do wait_for_dump_helpers() if core_pipe_limit == 0.

Because I don't really understand how core_pipe_limit is connected to
wait_for_dump_helpers(). Because, once again, we have to wait for core_pattern
app in any case.

> > As for implementation, my only complaint is that wait_for_dump_helpers() lacks
> > signal_pending() check, this wasn't answered.
> >
> I'll have to defer to others on this.  It seems to me that, given that we are
> waiting here in the context of a process that has already received a fatal signal,
> there's no opportunity to handle subsequent signals,

Yes, we can't handle subsequent signals, but this is not needed.

> I agree we busy wait if a signal is
> pending,

Yes. And this is not nice.

> but if we drop out of the loop if a signal is pending then we cancel
> the wait early, leading to the early removal of the /proc file for the crashing
> process.

Yes. But if signal_pending() == T we already have other problems. In
particular pipe_write() can fail, and in this case the coredump won't
complete anyway.

Hopefully this will be changed soon: the coredumping task should ignore
all signals except SIGKILL, which should terminate the coredump,
and in this case of course wait_for_dump_helpers() should abort.

> Could we add a schedule to the loop to allow the user space helper to
> run if a signal is pending instead of just dropping the loop?

Not sure I understand what you mean, but no, we can't return to user-space
to handle the signal. (and just in case, pipe_wait() does schedule()).

Oleg.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
