Message-Id: <20070614150417.c73fb6b9.akpm@linux-foundation.org>
Date: Thu, 14 Jun 2007 15:04:17 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Christoph Lameter <clameter@....com>
Cc: linux-kernel@...r.kernel.org, hch@...radead.org
Subject: Re: [patch 00/14] Page cache cleanup in anticipation of Large Blocksize support
On Thu, 14 Jun 2007 14:37:33 -0700 (PDT) Christoph Lameter <clameter@....com> wrote:
> On Thu, 14 Jun 2007, Andrew Morton wrote:
>
> > We want the 100% case.
>
> Yes that is what we intend to do. Universal support for larger blocksize.
> I.e. your desktop filesystem will use 64k page size and server platforms
> likely much larger.
With 64k pagesize the amount of memory required to hold a kernel tree (say)
will go from 270MB to 1400MB. This is not an optimisation.
Several 64k pagesize people have already spent time looking at various
tail-packing schemes to get around this serious problem. And that's on
_server_ class machines. Large ones. I don't think
laptop/desktop/small-server machines would want to go anywhere near this.
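The memory blowup comes from internal fragmentation: every file occupies whole pages in the pagecache, so a tree full of small files pays the full page size per file. A rough sketch of that arithmetic (the file sizes below are illustrative, not measurements from this thread):

```python
def pagecache_bytes(file_sizes, page_size):
    # Each cached file occupies ceil(size / page_size) whole pages.
    return sum(((size + page_size - 1) // page_size) * page_size
               for size in file_sizes)

# Hypothetical small-file-heavy tree: 20000 files of ~6 KB each.
sizes = [6 * 1024] * 20000
with_4k  = pagecache_bytes(sizes, 4 * 1024)   # 2 pages/file -> 8 KB each
with_64k = pagecache_bytes(sizes, 64 * 1024)  # 1 page/file -> 64 KB each
print(with_4k, with_64k, with_64k / with_4k)  # 8x blowup for this mix
```

The real ratio depends on the file-size distribution; the 270MB-to-1400MB figure above implies roughly a 5x blowup for an actual kernel tree.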
> fsck times etc etc are becoming an issue for desktop
> systems
I don't see what fsck has to do with it.
fsck is single-threaded (hence no locking issues) and operates against the
blockdev pagecache, doing a _lot_ of small reads (indirect blocks,
especially). If the memory consumed by each 4k read jumps to 64k, fsck
is likely to slow down, both from performing a lot more IO and from
entering page reclaim much earlier.
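In the worst case every scattered metadata read faults in a full pagecache page. A minimal sketch of that amplification, assuming no two metadata blocks share a page (my assumption, not a claim from this thread):

```python
def amplification(read_size, page_size):
    # Bytes brought into the pagecache per byte fsck actually asked for,
    # assuming scattered reads with no page-sharing between blocks.
    return max(page_size, read_size) / read_size

print(amplification(4096, 4096))   # 1.0  with 4k pages
print(amplification(4096, 65536))  # 16.0 with 64k pages
```

Real amplification would be lower where adjacent indirect blocks happen to land in the same 64k page, but scattered metadata gives little such locality.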