Message-ID: <48FCE1C4.20807@linux-foundation.org>
Date: Mon, 20 Oct 2008 14:53:40 -0500
From: Christoph Lameter <cl@...ux-foundation.org>
To: Miklos Szeredi <miklos@...redi.hu>
CC: penberg@...helsinki.fi, nickpiggin@...oo.com.au, hugh@...itas.com,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org
Subject: Re: SLUB defrag pull request?
Miklos Szeredi wrote:
> So, isn't it possible to do without get_dentries()? What's the
> fundamental difference between this and regular cache shrinking?
The fundamental difference is that slab defrag operates on sparsely populated
slab pages of dentries. It comes into effect when the density of dentries per
page is low and much of each page's memory is wasted. It defragments by
kicking out the few dentries left on low-density pages, so that those pages
can then be reclaimed.
> Case below was brainfart, please ignore. But that doesn't really
> help: the VFS assumes that you cannot umount while there are busy
> dentries/inodes. Usually it works this way: VFS first gets vfsmount
> ref, then gets dentry ref, and releases them in the opposite order.
> And umount is not allowed if vfsmount has a non-zero refcount (it's a
> bit more complicated, but the essence is the same).
The dentries that we get a ref on are candidates for removal, so their
lifetime is limited. Unmounting while slab defrag is removing dentries/inodes
means two mechanisms are tearing down the same objects concurrently.
If we have obtained a reference, invalidate_list() will return a nonzero
count of busy inodes, which triggers the printk in generic_shutdown_super().
But those inodes are merely in the middle of being reclaimed by slab defrag;
waiting briefly would remedy the situation.
We would need some way to make generic_shutdown_super() wait until slab defrag
is finished.