Message-ID: <18076.54789.359986.788880@stoffel.org>
Date: Tue, 17 Jul 2007 10:45:25 -0400
From: "John Stoffel" <john@...ffel.org>
To: utz lehmann <lkml123@...4n2c.de>
Cc: Rene Herman <rene.herman@...il.com>, Bodo Eggert <7eggert@....de>,
Matt Mackall <mpm@...enic.com>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Jesper Juhl <jesper.juhl@...il.com>,
Ray Lee <ray-lk@...rabbit.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
William Lee Irwin III <wli@...omorphy.com>
Subject: Re: [PATCH][RFC] 4K stacks default, not a debug thing any more...?
utz> On Tue, 2007-07-17 at 00:28 +0200, Rene Herman wrote:
>> Given that, as Arjan stated, Fedora and even RHEL have been using 4K
>> stacks for some time now, and that the latter is certainly a distribution
>> I would expect both to host a relatively large number of lvm/md/xfs (and
>> whatever other stack-eaters you have) users and to be fairly conservative
>> with respect to the chances of scribbling over kernel memory (I'm a
>> trusting person...), it seems there might at this stage be only very few
>> offenders left.
utz> I have to recompile the Fedora kernel rpms (fc6, f7) with 8k
utz> stacks on my i686 server. It's using NFS -> XFS -> DM -> MD
utz> (raid1) -> IDE disks. With 4k stacks it crashes (hangs) within
utz> minutes of using NFS. With 8k stacks it's rock solid: no
utz> crashes in months.
Does it give any useful information when it does crash? Could you build
a simple test case using ram disks instead of IDE disks and go from
there?
I think I should try to do this myself at some point...
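For reference, here is a minimal sketch of such a ram-disk reproducer,
rebuilding the reported NFS -> XFS -> DM -> MD stack on RAM-backed block
devices. The device names, sizes, volume names, and export options are
illustrative assumptions, not taken from the thread:

```shell
#!/bin/sh
# Hypothetical reproducer: stack NFS on XFS on DM (via LVM) on MD raid1,
# using ram disks instead of IDE disks. All names/sizes are assumptions.
# Only proceeds when run as root with the needed tools and ram disks.
if [ "$(id -u)" -eq 0 ] && command -v mdadm >/dev/null 2>&1 \
   && modprobe brd rd_size=65536 2>/dev/null && [ -b /dev/ram0 ]; then
    # MD raid1 across two 64 MiB ram disks
    mdadm --create /dev/md0 --run --level=1 \
          --raid-devices=2 /dev/ram0 /dev/ram1
    # DM (via LVM) on top of MD
    pvcreate /dev/md0
    vgcreate stackvg /dev/md0
    lvcreate -n stacklv -l 100%FREE stackvg
    # XFS on top of DM
    mkfs.xfs /dev/stackvg/stacklv
    mkdir -p /srv/stacktest
    mount /dev/stackvg/stacklv /srv/stacktest
    # NFS export on top of XFS
    echo '/srv/stacktest *(rw,no_root_squash,sync)' >> /etc/exports
    exportfs -ra
    # Drive it over loopback NFS; write-heavy load is what reportedly
    # triggered the 4K-stack hangs:
    #   mount -t nfs localhost:/srv/stacktest /mnt
    #   dd if=/dev/zero of=/mnt/big bs=1M count=48
else
    echo 'skipping: root, mdadm, and ram disks needed' >&2
fi
```

With such a setup, the deepest call chain exercises every layer at once
without involving real disk hardware, so a 4K-stack overflow should be
reproducible (and bisectable) entirely in memory.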
John