Message-Id: <20220701022947.10716-3-xiubli@redhat.com>
Date: Fri, 1 Jul 2022 10:29:47 +0800
From: xiubli@...hat.com
To: jlayton@...nel.org, idryomov@...il.com, dhowells@...hat.com
Cc: vshankar@...hat.com, linux-kernel@...r.kernel.org,
ceph-devel@...r.kernel.org, willy@...radead.org,
keescook@...omium.org, linux-fsdevel@...r.kernel.org,
linux-cachefs@...hat.com, Xiubo Li <xiubli@...hat.com>
Subject: [PATCH 2/2] ceph: do not release the folio lock in kceph
From: Xiubo Li <xiubli@...hat.com>
The netfs layer should be responsible for unlocking and putting the
folio, and we will now always return 0 on success.
URL: https://tracker.ceph.com/issues/56423
Signed-off-by: Xiubo Li <xiubli@...hat.com>
---
fs/ceph/addr.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index fe6147f20dee..3ef5200e2005 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1310,16 +1310,16 @@ static int ceph_netfs_check_write_begin(struct file *file, loff_t pos, unsigned
 	if (snapc) {
 		int r;

-		folio_unlock(folio);
-		folio_put(folio);
 		if (IS_ERR(snapc))
 			return PTR_ERR(snapc);
+		folio_unlock(folio);
 		ceph_queue_writeback(inode);
 		r = wait_event_killable(ci->i_cap_wq,
 					context_is_writeable_or_written(inode, snapc));
 		ceph_put_snap_context(snapc);
-		return r == 0 ? -EAGAIN : r;
+		folio_lock(folio);
+		return r == 0 ? -EAGAIN : 0;
 	}

 	return 0;
 }
--
2.36.0.rc1
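[Editor's note: for readers less familiar with the netfs write path, the locking contract this patch enforces can be sketched as a userspace toy model. All types and helpers below are hypothetical stand-ins, not the real kernel structures: the point is only that the check hook may drop and retake the folio lock internally, but must hand the folio back to the caller still locked, leaving unlock/put entirely to the netfs layer.]

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_EAGAIN 11	/* stand-in for -EAGAIN */

/* Hypothetical stand-in for struct folio; only tracks lock state. */
struct toy_folio {
	bool locked;
};

static void toy_folio_lock(struct toy_folio *f)   { f->locked = true; }
static void toy_folio_unlock(struct toy_folio *f) { f->locked = false; }

/*
 * Sketch of the fs check hook after this patch: it may unlock the
 * folio while waiting for writeback, but retakes the lock before
 * returning, and signals "retry" via -EAGAIN instead of unlocking
 * and putting the folio itself.
 */
static int toy_check_write_begin(struct toy_folio *folio, bool must_wait)
{
	if (must_wait) {
		toy_folio_unlock(folio);
		/* ... wait for writeback (elided in this sketch) ... */
		toy_folio_lock(folio);
		return -TOY_EAGAIN;	/* ask the caller to retry */
	}
	return 0;	/* success: folio still locked for the caller */
}

/* Sketch of the netfs caller: it owns unlock on every return path. */
static int toy_write_begin(struct toy_folio *folio)
{
	int ret;

	toy_folio_lock(folio);
	ret = toy_check_write_begin(folio, true);
	/* The hook must return with the folio still locked. */
	assert(folio->locked);
	if (ret == -TOY_EAGAIN) {
		ret = toy_check_write_begin(folio, false);
		assert(folio->locked);
	}
	toy_folio_unlock(folio);
	return ret;
}
```

In the buggy version modeled away here, the hook unlocked *and put* the folio before returning -EAGAIN, so the caller's own unlock/put on the retry path operated on a folio it no longer held, which is exactly the double-release this patch removes.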