Message-Id: <20190515192715.18000-29-vgoyal@redhat.com>
Date: Wed, 15 May 2019 15:27:13 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, linux-nvdimm@...ts.01.org
Cc: vgoyal@...hat.com, miklos@...redi.hu, stefanha@...hat.com,
dgilbert@...hat.com, swhiteho@...hat.com
Subject: [PATCH v2 28/30] fuse: Reschedule dax free work if too many EAGAIN attempts

fuse_dax_free_memory() can be very CPU intensive in corner cases. For example,
if one inode has consumed all the memory and a setupmapping request is
pending, the inode lock is held by that request and the worker thread will
not get the lock for a while. And given that a single inode is consuming all
the dax ranges, every attempt to acquire the lock will fail.

So if there are too many inode lock failures (-EAGAIN), reschedule the
worker with a 10ms delay.
Signed-off-by: Vivek Goyal <vgoyal@...hat.com>
---
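
For reference, the shape of the fix is the usual self-rearming delayed-work
pattern: when the handler stops making forward progress, it requeues itself
instead of spinning. Below is a minimal standalone sketch of that pattern
(hypothetical demo_* names and a plain mutex standing in for the per-inode
lock; only queue_delayed_work()/msecs_to_jiffies() on system_long_wq match
the actual hunk):

/* sketch: reschedule-with-backoff demo module, not the fuse code */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(demo_lock);
static struct delayed_work demo_work;
static int demo_nr_to_free = 100;

static void demo_free_fn(struct work_struct *work)
{
	int nr_freed = 0, nr_eagain = 0;

	while (nr_freed < demo_nr_to_free) {
		/* Too many trylock failures: stop burning cpu, retry in 10ms */
		if (nr_eagain > 20) {
			queue_delayed_work(system_long_wq, &demo_work,
					   msecs_to_jiffies(10));
			return;
		}

		/* Stand-in for the per-inode trylock in the real code */
		if (!mutex_trylock(&demo_lock)) {
			nr_eagain++;
			continue;
		}

		/* ... free one mapping range under the lock ... */
		mutex_unlock(&demo_lock);
		nr_freed++;
	}
}

static int __init demo_init(void)
{
	INIT_DELAYED_WORK(&demo_work, demo_free_fn);
	queue_delayed_work(system_long_wq, &demo_work, 0);
	return 0;
}

static void __exit demo_exit(void)
{
	cancel_delayed_work_sync(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

Rearming on system_long_wq with a short delay gets the worker off the CPU
so whoever is holding the inode lock can make progress in the meantime.
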
fs/fuse/file.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index b0293a308b5e..9b82d9b4ebc3 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -4047,7 +4047,7 @@ int fuse_dax_free_one_mapping(struct fuse_conn *fc, struct inode *inode,
 int fuse_dax_free_memory(struct fuse_conn *fc, unsigned long nr_to_free)
 {
 	struct fuse_dax_mapping *dmap, *pos, *temp;
-	int ret, nr_freed = 0;
+	int ret, nr_freed = 0, nr_eagain = 0;
 	u64 dmap_start = 0, window_offset = 0;
 	struct inode *inode = NULL;
 
@@ -4056,6 +4056,12 @@ int fuse_dax_free_memory(struct fuse_conn *fc, unsigned long nr_to_free)
 		if (nr_freed >= nr_to_free)
 			break;
 
+		if (nr_eagain > 20) {
+			queue_delayed_work(system_long_wq, &fc->dax_free_work,
+					   msecs_to_jiffies(10));
+			return 0;
+		}
+
 		dmap = NULL;
 		spin_lock(&fc->lock);
 
@@ -4093,8 +4099,10 @@ int fuse_dax_free_memory(struct fuse_conn *fc, unsigned long nr_to_free)
 		}
 
 		/* Could not get inode lock. Try next element */
-		if (ret == -EAGAIN)
+		if (ret == -EAGAIN) {
+			nr_eagain++;
 			continue;
+		}
 		nr_freed++;
 	}
 	return 0;
--
2.20.1