Message-ID: <a56dfcf00708010111n4b862a11wf91fe9677ee46143@mail.gmail.com>
Date:	Wed, 1 Aug 2007 04:11:23 -0400
From:	"Dan Merillat" <dan.merillat@...il.com>
To:	"Eric Sandeen" <sandeen@...deen.net>
Cc:	"Satyam Sharma" <satyam.sharma@...il.com>,
	"Alan Cox" <alan@...rguk.ukuu.org.uk>,
	"Andrea Arcangeli" <andrea@...e.de>,
	"Matt Mackall" <mpm@...enic.com>,
	"Rene Herman" <rene.herman@...il.com>,
	"Ray Lee" <ray-lk@...rabbit.org>, "Bodo Eggert" <7eggert@....de>,
	"Jeremy Fitzhardinge" <jeremy@...p.org>,
	"Jesper Juhl" <jesper.juhl@...il.com>,
	"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>,
	"William Lee Irwin III" <wli@...omorphy.com>,
	"David Chinner" <dgc@....com>,
	"Arjan van de Ven" <arjan@...radead.org>
Subject: Re: [PATCH][RFC] 4K stacks default, not a debug thing any more...?

On 7/31/07, Eric Sandeen <sandeen@...deen.net> wrote:

> No, what I had did only that, so it was still a matter of probabilities...

How expensive would it be to allocate two pages, then use the MMU to
mark the second page unwritable?  Hardware-wise it should be possible
(for constant 4k page sizes; I haven't worked with variable-pagesize
MMUs), and since it's a per-context-switch constant operation, it would
be a special case in the fault handler rather than adding another entry
to the VM for every process.
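
As a rough userspace analogy of that guard-page idea - plain mmap/mprotect
rather than the actual kernel mechanism - the sketch below allocates two
pages and revokes access to the second one, so any overflow into it takes
a fault instead of silently scribbling over whatever lives next door:

/* Userspace analogy of a stack guard page: two pages are allocated and
 * the second one is made inaccessible, so the MMU faults on overflow.
 * (A real guard page would sit in the direction the stack grows, which
 * is downward on x86; the direction doesn't matter for the demo.)      */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	char *stack = mmap(NULL, 2 * pagesize, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (stack == MAP_FAILED)
		return 1;

	/* The second page becomes the guard: any access now faults. */
	if (mprotect(stack + pagesize, pagesize, PROT_NONE))
		return 1;

	memset(stack, 0, pagesize);	/* fine: first page is writable */
	printf("first page is usable\n");

	stack[pagesize] = 1;		/* faults: we've hit the guard  */
	return 0;
}

In the kernel the equivalent would be flipping the protection bits in the
PTE rather than calling mprotect(), but the point is the same: an overflow
becomes a clean, catchable fault instead of silent corruption.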

The use of large hardware pages to cover the kernel mapping could be
worked around by leaving the area where the current process stack
resides mapped via 4k pages.  Of course, I haven't touched a modern PC
MMU in ages, so I could be missing something fundamentally difficult.
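
Purely as a sketch of that workaround - made-up names and sizes, nothing
resembling the real page-table code - the decision reduces to an overlap
test: if a large mapping would cover any of the current task's stack
pages, that region has to stay backed by 4k PTEs so the guard page can
keep its own protection bits:

/* Conceptual only: does a large (2MB here) kernel mapping overlap the
 * current stack's pages (one usable page plus one guard page)?  If so,
 * that region would need to be mapped with 4K PTEs instead.            */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE        4096UL
#define LARGE_PAGE_SIZE  (2UL * 1024 * 1024)
#define STACK_PAGES      2UL	/* one usable page + one guard page */

static bool needs_4k_mapping(uintptr_t large_page_base, uintptr_t stack_base)
{
	uintptr_t stack_end = stack_base + STACK_PAGES * PAGE_SIZE;
	uintptr_t large_end = large_page_base + LARGE_PAGE_SIZE;

	/* Standard interval-overlap test. */
	return stack_base < large_end && stack_end > large_page_base;
}

int main(void)
{
	/* A stack at 0x3ff000 straddles the end of a 2MB mapping at
	 * 0x200000, so that mapping would have to be split: prints 1.  */
	printf("%d\n", needs_4k_mapping(0x200000, 0x3ff000));
	return 0;
}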

The other issue is with the layered I/O design - no matter what we
configure the stack size to, it is still possible to create a set of
translation layers that will overflow it regularly:  XFS on dm_crypt on
loop on XFS on dm_crypt on loop, ad infinitum.
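
To make that concrete, here's a toy - userspace, nothing to do with the
real block layer - showing why the stacking hurts: if each translation
layer keeps its own locals on the stack and calls the next layer's submit
path synchronously, total stack use is the sum of every layer's frame, so
deep enough nesting exhausts any fixed-size stack:

/* Toy model of stacked translation layers.  Each layer has some scratch
 * space in its frame and descends synchronously into the layer below,
 * so the frames pile up: roughly 512 bytes per layer in this toy, and
 * unbounded in principle as more layers are stacked.                   */
#include <stdio.h>

struct layer {
	const char *name;
	struct layer *lower;
};

static void submit(struct layer *l, int depth)
{
	char scratch[512];	/* stand-in for per-layer locals */

	snprintf(scratch, sizeof(scratch), "%*s%s", depth * 2, "", l->name);
	printf("%s: ~%zu bytes of scratch frames so far\n",
	       scratch, (size_t)(depth + 1) * sizeof(scratch));

	if (l->lower)
		submit(l->lower, depth + 1);	/* synchronous descent */
}

int main(void)
{
	/* "XFS on dm_crypt on loop on XFS on dm_crypt on loop", built
	 * bottom-up.  Add more layers and the total only grows.        */
	struct layer loop2  = { "loop",     NULL     };
	struct layer crypt2 = { "dm_crypt", &loop2   };
	struct layer xfs2   = { "xfs",      &crypt2  };
	struct layer loop1  = { "loop",     &xfs2    };
	struct layer crypt1 = { "dm_crypt", &loop1   };
	struct layer xfs1   = { "xfs",      &crypt1  };

	submit(&xfs1, 0);
	return 0;
}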

That said, I'm missing something here - why is the stack growing?
Filesystems should be issuing bios with callbacks, so they should be
back off the stack by the time the I/O completes; the same goes for dm,
loop, etc.  Am I missing a step where they use a wrapper function that
pretends to be synchronous?
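
By "a wrapper function that pretends to be synchronous" I mean something
like the following - made-up names, not the real bio API: the asynchronous
submit hands the request off and returns, but the wrapper sits in the
caller's frame until the completion callback fires, so every layer that
does the same keeps its whole frame pinned on the stack for the duration:

/* Sketch of async submit vs. a blocking wrapper around it.  In this toy
 * the "completion" runs inline; in the real thing it would arrive later
 * from interrupt or worker context, and the wrapper would sleep with
 * its frame (and every caller's frame above it) still on the stack.    */
#include <pthread.h>
#include <stdio.h>

struct request {
	void (*done)(struct request *);	/* completion callback */
	pthread_mutex_t lock;
	pthread_cond_t  cond;
	int             completed;
};

static void async_submit(struct request *rq)
{
	/* Hand the request off; the caller does not wait here. */
	rq->done(rq);
}

static void sync_done(struct request *rq)
{
	pthread_mutex_lock(&rq->lock);
	rq->completed = 1;
	pthread_cond_signal(&rq->cond);
	pthread_mutex_unlock(&rq->lock);
}

static void sync_submit_and_wait(struct request *rq)
{
	rq->done = sync_done;
	async_submit(rq);

	/* The "pretend synchronous" part: block until completion, with
	 * this frame and everything above it held live on the stack.   */
	pthread_mutex_lock(&rq->lock);
	while (!rq->completed)
		pthread_cond_wait(&rq->cond, &rq->lock);
	pthread_mutex_unlock(&rq->lock);
}

int main(void)
{
	struct request rq = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.cond = PTHREAD_COND_INITIALIZER,
	};

	sync_submit_and_wait(&rq);
	printf("looked synchronous from the caller's point of view\n");
	return 0;
}

If a layer in the stack above did its work through a wrapper like that
instead of returning once the bio is queued, its frame would never get a
chance to unwind - which is the scenario I'm asking about.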
