Message-Id: <201005241439.48373.manningc2@actrix.gen.nz>
Date: Mon, 24 May 2010 14:39:48 +1200
From: Charles Manning <manningc2@...rix.gen.nz>
To: linux-kernel@...r.kernel.org
Subject: Trying to use SLUB in an odd way
Hi
YAFFS uses an internal, almost SLUB-like allocator, and I've been looking at
moving it to SLUB as part of an attempt to mainline yaffs.
In yaffs I create a lot of tiny objects which are allocated in blocks, as
SLUB does, and then managed on a free list. Very slubbish so far, though mine
is far less intelligent than SLUB.
The difference, though, is that I keep the objects for each mount point
separate and can dump the whole lot on umount without freeing objects
individually: I just deallocate the whole cache.
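
Roughly, the current scheme looks like this (a much simplified sketch, not
the real yaffs code; the names and the objects-per-block count are made up
for illustration):

#include <linux/list.h>
#include <linux/slab.h>

struct yaffs_tnode {                    /* illustrative tiny object */
        struct yaffs_tnode *next;       /* free-list link while unused */
        /* ... payload ... */
};

#define YAFFS_OBJS_PER_BLOCK 100

struct yaffs_block {
        struct list_head list;
        struct yaffs_tnode objs[YAFFS_OBJS_PER_BLOCK];
};

struct yaffs_allocator {                /* one per mount point */
        struct list_head blocks;        /* every block ever allocated */
        struct yaffs_tnode *free_list;
};

static int yaffs_alloc_grow(struct yaffs_allocator *a)
{
        struct yaffs_block *b = kmalloc(sizeof(*b), GFP_KERNEL);
        int i;

        if (!b)
                return -ENOMEM;
        list_add(&b->list, &a->blocks);
        for (i = 0; i < YAFFS_OBJS_PER_BLOCK; i++) {
                b->objs[i].next = a->free_list; /* thread onto free list */
                a->free_list = &b->objs[i];
        }
        return 0;
}

static struct yaffs_tnode *yaffs_alloc_obj(struct yaffs_allocator *a)
{
        struct yaffs_tnode *t;

        if (!a->free_list && yaffs_alloc_grow(a) < 0)
                return NULL;
        t = a->free_list;
        a->free_list = t->next;
        return t;
}

static void yaffs_free_obj(struct yaffs_allocator *a, struct yaffs_tnode *t)
{
        t->next = a->free_list;         /* back onto the free list */
        a->free_list = t;
}

/* umount: dump the whole lot at once, no per-object frees */
static void yaffs_alloc_dump(struct yaffs_allocator *a)
{
        struct yaffs_block *b, *tmp;

        list_for_each_entry_safe(b, tmp, &a->blocks, list)
                kfree(b);
        INIT_LIST_HEAD(&a->blocks);
        a->free_list = NULL;
}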
There are two problems I encountered in moving to SLUB:
1) I want to keep each mount point's objects separate, but SLUB just merges
the new cache with an existing cache of the same size. I managed to trick
SLUB into keeping yaffs objects in their own cache by assigning a fake ctor,
since SLUB won't merge caches that have a constructor. That stops the merging
(well, at present anyway; that could easily change in the future, e.g. if the
ctor check is dropped). Like a VW Bug: ugly, but it gets you there....
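
The trick looks something like this (sketch only; the struct and cache names
are illustrative):

#include <linux/slab.h>

struct yaffs_obj {                      /* stand-in for the real object */
        int dummy;
        /* ... */
};

/*
 * No-op constructor. It exists purely so SLUB will not merge this
 * cache with another cache of the same size: SLUB declines to merge
 * caches that have a ctor.
 */
static void yaffs_obj_ctor(void *obj)
{
}

static struct kmem_cache *yaffs_obj_cache;

static int yaffs_create_cache(void)
{
        yaffs_obj_cache = kmem_cache_create("yaffs_obj",
                                            sizeof(struct yaffs_obj), 0,
                                            0, yaffs_obj_ctor);
        return yaffs_obj_cache ? 0 : -ENOMEM;
}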
2) If I destroy a cache that still has objects in use, SLUB gets upset and
spews warnings. I don't like the idea of just ignoring warnings, but I also
don't want to manually tear down trees etc. when the existing "just dump it"
approach is a lot faster. Pity there is no "trust me, I know what I'm doing"
flag.
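
For comparison, a clean teardown would have to look something like this
(simplified sketch; I'm pretending all live objects sit on one flat list,
when really they're in per-mount trees, which is exactly the walking that
makes this slow):

#include <linux/list.h>
#include <linux/slab.h>

struct yaffs_obj {
        struct list_head live;          /* illustrative link for the walk */
        /* ... payload ... */
};

static LIST_HEAD(yaffs_live_objs);
static struct kmem_cache *yaffs_obj_cache;

static void yaffs_teardown_cache(void)
{
        struct yaffs_obj *obj, *tmp;

        /* free every live object individually... */
        list_for_each_entry_safe(obj, tmp, &yaffs_live_objs, live) {
                list_del(&obj->live);
                kmem_cache_free(yaffs_obj_cache, obj);
        }

        /* ...and only then will kmem_cache_destroy() not warn */
        kmem_cache_destroy(yaffs_obj_cache);
}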
Questions:
A) Is there a better way to use SLUB for this, or is it better to just
continue with my manual allocator?
B) Is it worth adding flags to kmem_cache_create() to say:
a) Don't merge this cache with others.
b) "Trust me, I know what I'm doing": allow the cache to be destroyed with
objects still allocated.
If (B) makes sense I'll put together a patch; a sketch of how the flags
might look is below.
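
For illustration only (neither flag exists today; the names are invented
here purely to make the proposal concrete, and yaffs_obj is the stand-in
struct from the sketches above):

/*
 * SLAB_NO_MERGE:     never merge this cache with another cache.
 * SLAB_BULK_DISCARD: "trust me" - kmem_cache_destroy() quietly
 *                    discards any objects still allocated.
 */
static struct kmem_cache *cache;

static int yaffs_mount_one(void)
{
        cache = kmem_cache_create("yaffs_obj", sizeof(struct yaffs_obj),
                                  0, SLAB_NO_MERGE | SLAB_BULK_DISCARD,
                                  NULL);
        return cache ? 0 : -ENOMEM;
}

static void yaffs_umount_one(void)
{
        /* with SLAB_BULK_DISCARD: no per-object frees, no warnings */
        kmem_cache_destroy(cache);
}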
-- Charles