Date:	Tue, 17 Jul 2007 15:46:07 +0200
From:	Rene Herman <rene.herman@...il.com>
To:	Matt Mackall <mpm@...enic.com>
CC:	Jeremy Fitzhardinge <jeremy@...p.org>,
	Jesper Juhl <jesper.juhl@...il.com>,
	Ray Lee <ray-lk@...rabbit.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	William Lee Irwin III <wli@...omorphy.com>,
	David Chinner <dgc@....com>
Subject: Re: [PATCH][RFC] 4K stacks default, not a debug thing any more...?

On 07/17/2007 01:38 AM, Matt Mackall wrote:

> On Sun, Jul 15, 2007 at 12:19:15AM +0200, Rene Herman wrote:

>> Quite. Of course, saying "our stacks are 1 page" would by far be the
>> easiest solution to that. Personally, I've been running with 4K stacks
>> exclusively on a variety of machines for quite some time now, but I
>> can't say I'm all too adventurous with respect to filesystems
>> (especially) so I'm not sure how many problems remain with 4K stacks. I
>> did recently see Andrew Morton say that problems _do_ still exist. If
>> it's just XFS -- well, heck...
> 
> One long-standing problem is DM/LVM. That -may- be fixed now, but I
> suspect issues remain.

Yes, three cases were again reported in this thread alone. Problems do seem 
to be nicely isolated to that specific issue...

>>> int growstack(int headroom, int (*func)(void *data), void *data)
>>> {

[ ... ]

>>> }

>> This would also need something to tell func() where its current_thread_info 
>>  is now at.
> 
> That'd be handled in the usual way by switch_to_new_stack. That is,
> we'd store the location of the old stack at the top of the new stack
> and then literally change everything to point to the new stack.
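
Something shaped roughly like the below, I take it? Purely illustrative, of 
course -- stack_headroom(), alloc_new_stack(), switch_to_new_stack() and 
switch_back_and_free() are made-up names for things that don't exist today:

int growstack(int headroom, int (*func)(void *data), void *data)
{
        void *new;
        int ret;

        /* enough room left on the current stack: just call it */
        if (stack_headroom() >= headroom)
                return func(data);

        /* otherwise grab another THREAD_SIZE-ish chunk... */
        new = alloc_new_stack();
        if (!new)
                return -ENOMEM;

        /* ...save the old stack pointer at the top of the new stack and
           point esp (and everything else that cares) at the new one */
        switch_to_new_stack(new);
        ret = func(data);
        switch_back_and_free(new);

        return ret;
}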

I might not understand what you're saying, but I don't believe that would do.
The current thread_info _itself_ (i.e., the struct itself, not a pointer) is 
located at esp & ~(THREAD_SIZE - 1), meaning you'd either have to copy the 
struct over to the new stack, or forgo that historic optimization (don't get 
me wrong, either may be okay).
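
For reference, current_thread_info() on i386 is essentially nothing more than 
that mask applied to the stack pointer (quoting include/asm-i386/thread_info.h 
from memory, so take the exact asm with a grain of salt):

/* thread_info sits at the bottom of the THREAD_SIZE-aligned stack,
   so masking the low bits of %esp finds it */
static inline struct thread_info *current_thread_info(void)
{
        struct thread_info *ti;

        __asm__("andl %%esp, %0" : "=r" (ti) : "0" (~(THREAD_SIZE - 1)));
        return ti;
}

So anything running on the new stack that calls current_thread_info() (and 
hence current, via the ->task pointer) ends up looking at whatever happens to 
sit at the bottom of the new THREAD_SIZE-aligned block, unless the struct is 
copied over there first.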

>> Which might not be much of a problem. Can't think of much else 
>> either but it's the kind of thing you'd _like_ to be a problem just to have 
>> an excuse to shoot down an icky notion like that...
> 
> It's not any ickier than explicitly calling schedule().

Somewhat comparable in notion perhaps, but I disagree on the relative level 
of ickiness. You call schedule() when you know you no longer need to hog the 
CPU and when you know it's safe to do so. Calling via growstack() looks to be 
an "ah, heck, let's err on the safe side since we don't have a bleedin' clue 
otherwise" sort of thing.

>> Would you intend this just as a "make this path work until we fix it 
>> properly" kind of thing?
> 
> Maybe.

If you know, _can_ DM/LVM (and/or XFS) in fact be fixed sanely and in a 
timely fashion, or are we looking at something fundamental?

Rene.
