Message-ID: <alpine.LFD.2.00.0901092000130.6528@localhost.localdomain>
Date:	Fri, 9 Jan 2009 20:05:27 -0800 (PST)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Nicholas Miell <nmiell@...cast.net>
cc:	Ingo Molnar <mingo@...e.hu>, jim owens <jowens@...com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Chris Mason <chris.mason@...cle.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	paulmck@...ux.vnet.ibm.com, Gregory Haskins <ghaskins@...ell.com>,
	Matthew Wilcox <matthew@....cx>,
	Andi Kleen <andi@...stfloor.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	linux-btrfs <linux-btrfs@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Nick Piggin <npiggin@...e.de>,
	Peter Morreale <pmorreale@...ell.com>,
	Sven Dietrich <SDietrich@...ell.com>
Subject: Re: [patch] measurements, numbers about CONFIG_OPTIMIZE_INLINING=y
 impact



On Fri, 9 Jan 2009, Nicholas Miell wrote:
> 
> It's only too big if you always keep it in memory, and I wasn't
> suggesting that.

Umm. We're talking kernel panics here. If it's not in memory, it doesn't 
exist as far as the kernel is concerned.

If it doesn't exist, it cannot be reported.

> My point was that you can get completely accurate stack traces in the
> face of gcc's inlining, and that blaming gcc because you can't get good
> stack traces because the kernel's debugging infrastructure isn't up to
> snuff isn't exactly fair.

No. I'm blaming inlining for making debugging harder.

And that's ok - IF IT IS WORTH IT.

It's not. Gcc inlining decisions suck. gcc inlines stuff that doesn't 
really benefit from being inlined, and doesn't inline stuff that _does_.

What's so hard to accept in that?
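
As a concrete sketch of the debugging cost (my illustration, not part
of the original mail, using a plain gcc attribute rather than anything
kernel-specific):

/*
 * Hypothetical stand-alone example: if gcc chooses to inline helper()
 * into caller(), the backtrace of a crash shows only caller(), because
 * a frame for helper() never exists, so the report points at the wrong
 * code.  Forcing noinline keeps the frame, at the cost of a real call.
 */
static __attribute__((noinline)) void helper(char *p)
{
	*p = 0;			/* NULL dereference faults here */
}

void caller(void)
{
	/* When helper() is inlined, a fault here is blamed on caller(). */
	helper((char *)0);
}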

> And this is where we disagree. I believe that crash dumps should be the
> norm and all the reasons you have against crash dumps in general are in
> fact reasons against Linux's sub-par implementation of crash dumps in
> specific.

Good luck with that. Go ahead and try it.  You'll find it wasn't so easy 
after all.

> So, here I am, a non-enterprise end user with a non-stale kernel who'd
> love to be able to give you a crash dump (or, more likely, a stack trace
> created from that crash dump), but I can't because Linux crash dumps are
> stuck in the enterprise ghetto.

No, you're stuck because you apparently have your mind stuck on a 
crash-dump, and aren't willing to look at alternatives.

You could use a network console. Trust me - if you can't set up a network 
console, you have no business mucking around with crash dumps.
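
For reference, a minimal netconsole setup is roughly this (the
addresses, interface name and MAC are placeholders for your own
network, and netcat option spelling differs between versions):

# On the machine being debugged: send kernel messages as UDP packets.
# Parameter format: src-port@src-ip/dev,dst-port@dst-ip/dst-mac
modprobe netconsole \
    netconsole=6665@192.168.0.2/eth0,6666@192.168.0.3/00:11:22:33:44:55

# The same netconsole=... string can go on the kernel command line
# instead, so it is active from early boot.

# On the receiving machine: capture whatever the dying kernel manages
# to push out before it stops.
nc -u -l -p 6666 | tee oops.log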

And if the crash is hard enough that you can't get any output from that, 
again, a crash dump wouldn't exactly help, would it?

> Hell, I'd be happy if I could get the normal panic text written to
> disk, but since the hard part is the actual writing to disk, there's no
> reason not to do the full crash dump if you can.

Umm. And why do you think the two have anything to do with each other?

Only insane people want the kernel to write to disk when it has problems. 
Sane people try to write to something that doesn't potentially overwrite 
their data. Like the network.

Which is there. Try it. Trust me, it's a _hell_ of a lot more likely to 
work than a crash dump.

			Linus