Message-Id: <20121130185211.261563985@linuxfoundation.org>
Date: Fri, 30 Nov 2012 10:56:04 -0800
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
alan@...rguk.ukuu.org.uk, Jan Kara <jack@...e.cz>,
OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>,
Al Viro <viro@...iv.linux.org.uk>,
Wu Fengguang <fengguang.wu@...el.com>,
Dave Chinner <david@...morbit.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [ 33/54] writeback: put unused inodes to LRU after writeback completion

3.6-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jan Kara <jack@...e.cz>

commit 4eff96dd5283a102e0c1cac95247090be74a38ed upstream.
Commit 169ebd90131b ("writeback: Avoid iput() from flusher thread")
removed the iget()-iput() pair from inode writeback. As a side effect,
inodes that are dirty during the iput_final() call won't ever be added
to the inode LRU (iput_final() doesn't add dirty inodes to the LRU, and
later, when the inode is cleaned, there is no one left to add it there).
Thus inodes are effectively unreclaimable until someone looks them up
again.

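As an illustration only (not part of the patch), here is a minimal,
self-contained userspace sketch of that pre-patch behaviour. The names
mirror the kernel functions, but this is a toy model: the real code also
checks I_SYNC, I_FREEING, I_WILL_FREE, the reference count and MS_ACTIVE,
all under i_lock (see the hunks below).

/* Toy userspace model of the pre-patch behaviour -- not kernel code. */
#include <stdio.h>

#define I_DIRTY 0x1

struct inode {
	unsigned int i_state;
	int on_lru;		/* stands in for the per-sb inode LRU list */
};

static void inode_lru_list_add(struct inode *inode)
{
	inode->on_lru = 1;
}

/* Pre-patch iput_final(): a dirty inode is not put on the LRU. */
static void iput_final(struct inode *inode)
{
	if (!(inode->i_state & I_DIRTY))
		inode_lru_list_add(inode);
}

/* Pre-patch writeback completion: clears the dirty state, nothing more. */
static void inode_sync_complete(struct inode *inode)
{
	inode->i_state &= ~I_DIRTY;
}

int main(void)
{
	struct inode ino = { .i_state = I_DIRTY, .on_lru = 0 };

	iput_final(&ino);		/* last reference dropped while still dirty */
	inode_sync_complete(&ino);	/* flusher thread cleans it later */
	printf("i_state=%#x on_lru=%d\n", ino.i_state, ino.on_lru);
	return 0;
}

The program ends with a clean, unused inode that is not on the LRU, i.e.
unreclaimable; that is the state the reproducer below accumulates for a
million inodes.
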
The practical effect of this bug is limited by the fact that inodes are
pinned by a dentry for long enough that the inode gets cleaned. But the
bug can still have nasty consequences, leading up to OOM conditions
under certain circumstances. The following can easily reproduce the
problem:

for (( i = 0; i < 1000; i++ )); do
  mkdir $i
  for (( j = 0; j < 1000; j++ )); do
    touch $i/$j
    echo 2 > /proc/sys/vm/drop_caches
  done
done

then one needs to run 'sync; ls -lR' to make inodes reclaimable again.

We fix the issue by inserting unused clean inodes into the LRU after
writeback finishes in inode_sync_complete().

Signed-off-by: Jan Kara <jack@...e.cz>
Reported-by: OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
Cc: Al Viro <viro@...iv.linux.org.uk>
Cc: OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
Cc: Wu Fengguang <fengguang.wu@...el.com>
Cc: Dave Chinner <david@...morbit.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 fs/fs-writeback.c |    2 ++
 fs/inode.c        |   16 ++++++++++++++--
 fs/internal.h     |    1 +
 3 files changed, 17 insertions(+), 2 deletions(-)

--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -228,6 +228,8 @@ static void requeue_io(struct inode *ino
 static void inode_sync_complete(struct inode *inode)
 {
 	inode->i_state &= ~I_SYNC;
+	/* If inode is clean and unused, put it into LRU now... */
+	inode_add_lru(inode);
 	/* Waiters must see I_SYNC cleared before being woken up */
 	smp_mb();
 	wake_up_bit(&inode->i_state, __I_SYNC);
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -408,6 +408,19 @@ static void inode_lru_list_add(struct in
 	spin_unlock(&inode->i_sb->s_inode_lru_lock);
 }
 
+/*
+ * Add inode to LRU if needed (inode is unused and clean).
+ *
+ * Needs inode->i_lock held.
+ */
+void inode_add_lru(struct inode *inode)
+{
+	if (!(inode->i_state & (I_DIRTY | I_SYNC | I_FREEING | I_WILL_FREE)) &&
+	    !atomic_read(&inode->i_count) && inode->i_sb->s_flags & MS_ACTIVE)
+		inode_lru_list_add(inode);
+}
+
+
 static void inode_lru_list_del(struct inode *inode)
 {
 	spin_lock(&inode->i_sb->s_inode_lru_lock);
@@ -1390,8 +1403,7 @@ static void iput_final(struct inode *ino
 
 	if (!drop && (sb->s_flags & MS_ACTIVE)) {
 		inode->i_state |= I_REFERENCED;
-		if (!(inode->i_state & (I_DIRTY|I_SYNC)))
-			inode_lru_list_add(inode);
+		inode_add_lru(inode);
 		spin_unlock(&inode->i_lock);
 		return;
 	}
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -110,6 +110,7 @@ extern int open_check_o_direct(struct fi
  * inode.c
  */
 extern spinlock_t inode_sb_list_lock;
+extern void inode_add_lru(struct inode *inode);
 
 /*
  * fs-writeback.c
--