Message-ID: <4564C28B.30604@redhat.com>
Date: Wed, 22 Nov 2006 16:35:07 -0500
From: Wendy Cheng <wcheng@...hat.com>
To: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: [PATCH] prune_icache_sb
There seems to be a need to prune inode cache entries for specific
mount points (per VFS superblock) due to performance issues found after
some IO-intensive commands ("rsync", for example). The problem is
particularly serious for one of our kernel modules, which caches its
(cluster) locks based on the VFS inode implementation. These locks are
created by the inode creation call and purged when s_op->clear_inode()
is invoked. On larger servers equipped with plenty of memory, the
page dirty ratio may not pass the threshold that triggers the VM reclaim
logic, but the accumulated inode count (and the associated cluster locks)
can cause unacceptable performance degradation for latency-sensitive
applications.
After adding the attached inode-trimming patch, together with
shrink_dcache_sb(), we are able to keep the latency for one real-world
application within a satisfactory bound (consistently within 5
seconds, compared to the original fluctuation between 5 and 16 seconds).
The calls are placed in one of our kernel daemons, which wakes up at a
tunable interval to do the trimming work, as shown in the following code
segment. We would appreciate it if this patch could be accepted into the
mainline kernel.
	i_percent = sdp->sd_tune.gt_inoded_purge;
	if (i_percent) {
		if (i_percent > 100)
			i_percent = 100;
		a_count = atomic_read(&sdp->sd_inode_count);
		i_count = a_count * i_percent / 100;
		(void) shrink_dcache_sb(sdp->sd_vfs);
		(void) prune_icache_sb(i_count, sdp->sd_vfs);
	}
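For readers without the attachment: the idea behind prune_icache_sb() is to
walk the global unused-inode LRU and dispose only of entries belonging to the
given superblock, leaving inodes of other mounts untouched. The following
userspace sketch (hypothetical struct names and list handling, not the actual
kernel patch) illustrates just that per-superblock filtering:

```c
#include <stdlib.h>

/* Minimal stand-ins for the kernel structures (illustrative only). */
struct super_block { int id; };

struct inode {
	struct super_block *i_sb;	/* owning superblock */
	struct inode *next;		/* link on the global unused-inode LRU */
};

/* Global LRU of unused inodes (most recently unused at the head). */
static struct inode *inode_unused;

static void lru_add(struct inode *in)
{
	in->next = inode_unused;
	inode_unused = in;
}

/*
 * Dispose of up to nr unused inodes that belong to superblock sb,
 * skipping inodes of other mounts.  Returns the number freed.
 */
static int prune_icache_sb(int nr, struct super_block *sb)
{
	struct inode **pp = &inode_unused;
	int freed = 0;

	while (*pp && freed < nr) {
		struct inode *in = *pp;

		if (in->i_sb == sb) {
			*pp = in->next;	/* unlink and "dispose" */
			free(in);
			freed++;
		} else {
			pp = &in->next;	/* leave other superblocks alone */
		}
	}
	return freed;
}
```

The real kernel version additionally has to take inode_lock, respect inode
state bits, and hand the victims to the dispose list, but the selection logic
is the same filter shown above.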
-- Wendy
View attachment "inode_prune_sb.patch" of type "text/x-patch" (2145 bytes)