Message-ID: <20070716233849.GE11115@waste.org>
Date:	Mon, 16 Jul 2007 18:38:50 -0500
From:	Matt Mackall <mpm@...enic.com>
To:	Rene Herman <rene.herman@...il.com>
Cc:	Jeremy Fitzhardinge <jeremy@...p.org>,
	Jesper Juhl <jesper.juhl@...il.com>,
	Ray Lee <ray-lk@...rabbit.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	William Lee Irwin III <wli@...omorphy.com>,
	David Chinner <dgc@....com>
Subject: Re: [PATCH][RFC] 4K stacks default, not a debug thing any more...?

On Sun, Jul 15, 2007 at 12:19:15AM +0200, Rene Herman wrote:
> On 07/14/2007 09:17 PM, Matt Mackall wrote:
> 
> >On Fri, Jul 13, 2007 at 03:20:54PM +0200, Rene Herman wrote:
> 
> >>As far as I'm aware, the actual reason for 4K stacks is that after the
> >>system has been up and running for some time, getting 1 physically
> >>contiguous page becomes significantly easier than getting 2, which
> >>would make the choice anything but arbitrary.
> >
> >If there are exactly two free pages in the system, the odds of them
> >being buddies (i.e. adjacent AND properly aligned) are quite small. The
> >available page pool has to grow quite a bit before the availability of
> >order-1 page pairs approaches 100%.
> >
> >So if we fail to allocate an 8k stack when we could have allocated a
> >4k stack, we're almost certainly failing significantly prematurely.
> 
> Quite. Of course, saying "our stacks are 1 page" would be by far the easiest
> solution to that. Personally, I've been running with 4K stacks exclusively
> on a variety of machines for quite some time now, but I can't say I'm all
> too adventurous, especially with respect to filesystems, so I'm not sure
> how many problems remain with 4K stacks. I did recently see Andrew Morton
> say that problems _do_ still exist. If it's just XFS -- well, heck...

One long-standing problem is DM/LVM. That -may- be fixed now, but I
suspect issues remain.
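
To put the "adjacent AND properly aligned" condition above in concrete terms:
two pages form an order-1 buddy pair only when their page frame numbers are
{2n, 2n+1}, i.e. they differ only in bit 0. A minimal standalone sketch (plain
C, not kernel code; the helper name is made up for illustration):

	#include <stdbool.h>
	#include <stdio.h>

	/* Two free pages can merge into an order-1 block only if they are
	 * adjacent AND the lower PFN is even, i.e. the pair is {2n, 2n+1}.
	 * Adjacent but misaligned pages such as {2n+1, 2n+2} straddle a
	 * buddy boundary and cannot merge. */
	static bool is_order1_buddy_pair(unsigned long pfn_a, unsigned long pfn_b)
	{
		/* PFNs differing only in bit 0 are {2n, 2n+1} */
		return (pfn_a ^ pfn_b) == 1;
	}

	int main(void)
	{
		printf("%d\n", is_order1_buddy_pair(4, 5));	/* 1: buddies    */
		printf("%d\n", is_order1_buddy_pair(5, 6));	/* 0: misaligned */
		return 0;
	}

So with exactly two free pages at random PFNs, merging only happens when they
land on the same two-page boundary, hence the need for a much larger pool
before order-1 allocations reliably succeed.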
 
> >As I've pointed out before, it's fairly easy to make our stack
> >growable with a trampoline in the troublesome paths. Something like:
> >
> >int growstack(int headroom, int (*func)(void *data), void *data)
> >{
> >	void *new_stack;
> >	int ret;
> >
> >	if (likely(available_stack() > headroom))
> >		return func(data);
> >
> >#ifdef CONFIG_GROWSTACK_STATS
> >	/* gather statistics about stack-heavy paths */
> >#endif
> >	/* warn/abort if we're recursing too deeply */
> >
> >	new_stack = get_free_page();
> >	switch_to_new_stack(new_stack);
> >	ret = func(data);
> >	cleanup_stack(new_stack);
> >	return ret;
> >}
> 
> This would also need something to tell func() where its current_thread_info
> is now at.

That'd be handled in the usual way by switch_to_new_stack. That is,
we'd store the location of the old stack at the top of the new stack
and then literally change everything to point to the new stack.
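
For concreteness, a layout sketch of that bookkeeping (hypothetical, not
compilable kernel code: struct grown_stack and its field names are made up;
on i386, current_thread_info() is derived by masking the stack pointer, which
is why copying thread_info into the new page makes "everything" point at it):

	/* Hypothetical layout of the freshly allocated stack page. */
	struct grown_stack {
		struct thread_info ti;	/* copy of the old thread_info;
					 * current_thread_info() masks the
					 * stack pointer, so code running on
					 * this page finds this copy
					 * automatically */
		void *old_sp;		/* saved link back to the old stack,
					 * so cleanup_stack() can switch back
					 * and free this page */
		/* the rest of the page is the new stack, growing down from
		 * the top of the page toward this header */
	};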

> Which might not be much of a problem. Can't think of much else either, but
> it's the kind of thing you'd _like_ to be a problem, just to have an excuse
> to shoot down an icky notion like that...

It's not any ickier than explicitly calling schedule().
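
To illustrate, a known stack-heavy path would call the trampoline explicitly,
much like a voluntary preemption point. All names below are made up for the
example; only growstack() comes from the sketch above:

	struct deep_io_args {
		struct bio *bio;
	};

	static int deep_io_worker(void *data)
	{
		struct deep_io_args *args = data;

		return do_deep_io(args->bio);	/* the stack-heavy part */
	}

	int checked_deep_io(struct bio *bio)
	{
		struct deep_io_args args = { .bio = bio };

		/* insist on ~2KB of headroom before descending; grow onto a
		 * fresh page if the current stack can't provide it */
		return growstack(2048, deep_io_worker, &args);
	}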

> Would you intend this just as a "make this path work until we fix it 
> properly" kind of thing?

Maybe.

-- 
Mathematics is the supreme nostalgia of our time.
