Message-ID: <20070911201928.GA20688@lazybastard.org>
Date: Tue, 11 Sep 2007 22:19:29 +0200
From: Jörn Engel <joern@...fs.org>
To: Andrea Arcangeli <andrea@...e.de>
Cc: Mel Gorman <mel@....ul.ie>, Christoph Lameter <clameter@....com>,
torvalds@...ux-foundation.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, Christoph Hellwig <hch@....de>,
Mel Gorman <mel@...net.ie>,
William Lee Irwin III <wli@...omorphy.com>,
David Chinner <dgc@....com>,
Jens Axboe <jens.axboe@...cle.com>,
Badari Pulavarty <pbadari@...il.com>,
Maxim Levitsky <maximlevitsky@...il.com>,
Fengguang Wu <fengguang.wu@...il.com>,
swin wang <wangswin@...il.com>, totty.lu@...il.com,
hugh@...itas.com
Subject: Re: [00/41] Large Blocksize Support V7 (adds memmap support)
Odd. I keep arguing against the solution I prefer.
On Tue, 11 September 2007 21:20:52 +0200, Andrea Arcangeli wrote:
>
> The problem with the slub fragmentation isn't a new problem, it
> happens in today's kernels as well and at least the slab by design is
> meant to _defrag_ internally. So it's practically already solved and
> it provides some guarantee unlike the buddy allocator.
Slab defrag doesn't look like a solved problem. Basically, the slab
allocator was designed to group similar objects together. The main
reason, in this context, is that similar objects have similar
lifetimes. And it is true that one dentry's lifetime is more likely to
match another dentry's than, say, a struct bio's.
But different dentries still have vastly different lifetimes. And with
that, fragmentation will continue to occur. So the problem is not
solved. It is a hell of a lot better than pre-slab days, just not
perfect.
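
To illustrate what I mean by grouping: a dedicated slab cache keeps all
objects of one type on the same pool of pages. A minimal sketch (names
are made up; the real dentry cache lives in fs/dcache.c):

#include <linux/module.h>
#include <linux/slab.h>

struct foo {			/* stand-in for dentry, inode, ... */
	int data;
};

static struct kmem_cache *foo_cache;

static int __init foo_init(void)
{
	/* All foo objects come from the same slab pages, so similar
	 * objects end up together -- and, hopefully, so do their
	 * lifetimes. */
	foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
				      0, SLAB_HWCACHE_ALIGN, NULL);
	return foo_cache ? 0 : -ENOMEM;
}

static void use_foo(void)
{
	struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);

	if (f)
		kmem_cache_free(foo_cache, f);
}
module_init(foo_init);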
> What I think you're missing is that for Nick's worst case to trigger
> with the config_page_shift design, you would need the _whole_ ram to
> be _at_least_once_ allocated completely in kernel stacks. If the whole
> 100% of ram never went to slub purely as kernel stacks, such a
> scenario could never materialize.
Things get somewhat worse with multiple attack vectors (whether
malicious or accidental). Spending 20% of ram on each of {kernel
stacks, dentries, inodes, mlocked pages, size-XXX} would be sufficient
to pin all of memory. The system can spend 20% on kernel stacks with
80% free, then spend 20% on dentries with 60% free and 20% wasted in
almost-free kernel stack slabs, etc.
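
To spell out that arithmetic, a throwaway userspace sketch (all numbers
invented; I assume one live object per slab page keeps the full 20%
pinned):

#include <stdio.h>

int main(void)
{
	const char *cache[] = { "kernel stacks", "dentries", "inodes",
				"mlocked pages", "size-XXX" };
	double pinned = 0.0;	/* fraction of ram stuck in sparse slabs */
	int i;

	for (i = 0; i < 5; i++) {
		/* Fill 20% of ram with this cache's objects, then free
		 * almost all of them.  One live object per slab page
		 * keeps the page pinned, so the 20% stays lost. */
		pinned += 0.20;
		printf("after %-13s: %3.0f%% pinned, %3.0f%% free\n",
		       cache[i], pinned * 100, (1 - pinned) * 100);
	}
	return 0;
}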
To argue in your favor, for a change: the exact same scenario would be
possible with Christoph's solution as well. It would even be more
likely. Where in your case 20% of all memory has to go to each slab
cache at one time, in Christoph's case only one page per largepage of
that would be necessary. The rest could be allocated for other
purposes.
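
Rough numbers for the comparison, assuming 64KiB largepages built from
16 4KiB pages (my assumption, not a number from the patches):

#include <stdio.h>

int main(void)
{
	const int pages_per_largepage = 16;	/* assumed: 64KiB / 4KiB */
	double slab_case = 0.20;		/* per-cache cost above */
	/* One pinned 4KiB page spoils its whole largepage for
	 * higher-order use, so a sixteenth of the allocation suffices. */
	double largepage_case = slab_case / pages_per_largepage;

	printf("per cache, config_page_shift: %5.2f%% of ram\n",
	       slab_case * 100);
	printf("per cache, largepage scheme:  %5.2f%% of ram\n",
	       largepage_case * 100);
	return 0;
}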
So overall I prefer your approach, for whatever my two cents of
armchair opinion are worth.
Jörn
--
I've never met a human being who would want to read 17,000 pages of
documentation, and if there was, I'd kill him to get him out of the
gene pool.
-- Joseph Costello