Message-ID: <1426016724-23912-10-git-send-email-jbacik@fb.com>
Date: Tue, 10 Mar 2015 15:45:24 -0400
From: Josef Bacik <jbacik@...com>
To: <linux-fsdevel@...r.kernel.org>, <david@...morbit.com>,
<viro@...iv.linux.org.uk>, <jack@...e.cz>,
<linux-kernel@...r.kernel.org>
Subject: [PATCH 9/9] inode: don't softlockup when evicting inodes
On a box with a lot of RAM (148GB) I can make the box soft lockup after
running an fs_mark job that creates hundreds of millions of empty files.
This is because we never generate enough memory pressure to keep the number
of inodes on our unused list low, so when we go to unmount we have to evict
~100 million inodes. This makes one processor a very unhappy person, so add
a cond_resched() in dispose_list() and a cond_resched_lock() in the eviction
isolation function to combat this. Thanks,
Signed-off-by: Josef Bacik <jbacik@...com>
---
fs/inode.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/fs/inode.c b/fs/inode.c
index 17da8801..2191a3ce 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -569,6 +569,7 @@ static void dispose_list(struct list_head *head)
 		list_del_init(&inode->i_lru);
 
 		evict(inode);
+		cond_resched();
 	}
 }
 
@@ -599,6 +600,13 @@ __evict_inodes_isolate(struct list_head *item, struct list_lru_one *lru,
 	list_lru_isolate(lru, item);
 	spin_unlock(&inode->i_lock);
+
+	/*
+	 * We can have a ton of inodes to evict at unmount time, check to see if
+	 * we need to go to sleep for a bit so we don't livelock.
+	 */
+	if (cond_resched_lock(lock))
+		return LRU_REMOVED_RETRY;
 	return LRU_REMOVED;
 }
--
1.9.3