Date:	Mon,  1 Jun 2009 13:38:45 -0700 (PDT)
From:	Roland McGrath <roland@...hat.com>
To:	Oleg Nesterov <oleg@...hat.com>
Cc:	Alan Cox <alan@...rguk.ukuu.org.uk>, paul@...-scientist.net,
	linux-kernel@...r.kernel.org, stable@...nel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andi Kleen <andi@...stfloor.org>
Subject: Re: [PATCH] coredump: Retry writes where appropriate

IMHO it would certainly be wrong to have behavior differ based on what
particular method from userland was used to generate a signal.  I don't
think Alan was suggesting anything like that, just using ^C as shorthand
for what the user experience is that matters.  (Alan also mentioned ^\,
i.e. SIGQUIT.  I don't think it would make sense to have that interrupt
dumping--it's what you hit to request a core dump.)

I'm not really sure exactly how this has morphed over time, or what
internal wrinkles produced the observed behavior before.  I
suspect it was always more by happenstance than careful attention.

Any sig_fatal() non-core signal that was sent before the core dumping
actually started already prevents the dump now.  The complete_signal()
code along with the signal_group_exit() check in zap_threads() should
ensure that.  This is as it should be: there was no guarantee of the
order in which the signals were processed, so the effect is that the
core dump signal was swallowed by the arrival of the just-fatal signal.
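
For reference, roughly the shape of that check (an illustrative sketch
only, not verbatim fs/exec.c; the locking and the mm/core_state
bookkeeping are simplified away):

	/* Sketch: has an earlier fatal signal already claimed the exit? */
	static int zap_threads_sketch(struct task_struct *tsk, int exit_code)
	{
		int nr = -EAGAIN;

		spin_lock_irq(&tsk->sighand->siglock);
		if (!signal_group_exit(tsk->signal)) {
			/* No competing fatal signal: claim the dump. */
			tsk->signal->group_exit_code = exit_code;
			nr = 0;
		}
		spin_unlock_irq(&tsk->sighand->siglock);

		/* -EAGAIN: a just-fatal signal swallowed the core signal. */
		return nr;
	}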

Aside from that, I see the following categories for newly-arriving signals.

1. More core-dump signals.  e.g., it was already crashing and you hit ^\,
   or you just hit ^\ twice with a slight delay between presses.
2. Non-fatal signals (i.e. ones with handlers, stop signals).
3. Plain sig_fatal() non-core signals (e.g. SIGINT when not handled)
4. SIGKILL (an actual one from userland or oomkill, not group-exit)

#1 IMHO should not do anything at all.
You are asking for a core dump, and it's already doing one.

#2 should not do anything at all.
It's not really possible to suspend during the core dump, so unhandled,
unblocked stop signals can't do anything either.

#4 IMHO should always stop everything immediately.
That's what SIGKILL is for.  When userland generates a SIGKILL
explicitly, that says the top priority is to be gone and cease
consuming any resources ASAP.

#3 is the open question.  I don't feel strongly either way.

Whatever the decision on #3, we have a problem to fix for #1 and #2 in
any case.  These unblocked signals will cause TIF_SIGPENDING to be
set when dumping, either via recalc_sigpending() from dequeue_signal()
after the core signal is taken (more signals already pending), or via
signal_wake_up() from complete_signal() for newly-generated signals.
(PF_EXITING is not yet set to prevent it.)  This spuriously prevents
interruptible waits in the fs code or the pipe code or whatnot.
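
Concretely, those writes go through interruptible waits of roughly this
shape (a generic sketch, not actual fs/pipe.c code; space_available() is
a made-up helper), so a stray TIF_SIGPENDING makes them bail out early:

	/* Sketch of a pipe-style interruptible wait for the dump's writes. */
	static ssize_t wait_for_room_sketch(struct pipe_inode_info *pipe)
	{
		for (;;) {
			if (space_available(pipe))	/* hypothetical helper */
				return 0;
			if (signal_pending(current))
				/* spurious TIF_SIGPENDING: the write aborts */
				return -ERESTARTSYS;
			pipe_wait(pipe);		/* interruptible sleep */
		}
	}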

That looks simple to avoid just by clobbering ->real_blocked when we
start the core dump.  The dump-writing itself uses ->blocked, so that
actually won't pollute the data.  The obvious, less magical way would be
to add a SIGNAL_GROUP_DUMPING flag that we set at the beginning of the
dump, and make recalc_sigpending_tsk()/complete_signal() obey it.
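
Roughly like this, perhaps (a sketch only, not a patch; the flag value
is made up for illustration):

	#define SIGNAL_GROUP_DUMPING	0x00000100	/* illustrative value */

	static int recalc_sigpending_tsk(struct task_struct *t)
	{
		/* While dumping core, don't let stray signals set TIF_SIGPENDING. */
		if (t->signal->flags & SIGNAL_GROUP_DUMPING)
			return 0;
		/* ... existing pending-signal checks unchanged ... */
	}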


Thanks,
Roland