Message-Id: <20220426161204.17896-1-lhenriques@suse.de>
Date: Tue, 26 Apr 2022 17:12:04 +0100
From: Luís Henriques <lhenriques@...e.de>
To: Jeff Layton <jlayton@...nel.org>, Xiubo Li <xiubli@...hat.com>,
Ilya Dryomov <idryomov@...il.com>
Cc: ceph-devel@...r.kernel.org, linux-kernel@...r.kernel.org,
Luís Henriques <lhenriques@...e.de>,
Ryan Taylor <rptaylor@...c.ca>
Subject: [PATCH] ceph: fix statfs for subdir mounts
When a mount is done using as its base a directory that has a 'max_bytes'
quota, statfs uses that value as the total; the same 'max_bytes' should
also be used by statfs when a subdirectory of that directory is mounted,
unless another quota is set on the subdirectory. Unfortunately, if this
subdirectory only has the 'max_files' quota set, then statfs uses the
filesystem total instead. Fix this by making sure we only look up realms
that have the 'max_bytes' quota set.
Link: https://tracker.ceph.com/issues/55090
Cc: Ryan Taylor <rptaylor@...c.ca>
Signed-off-by: Luís Henriques <lhenriques@...e.de>
---
Hi!
Unfortunately, I don't think this is the real fix for the bug reported in
the tracker (or by Ryan on the mailing-list). I haven't seen any
reference to 'max_files' there, so I suspect Ryan and Dan are hitting
another bug. This can easily be checked with
'getfattr -n ceph.quota <subdir>'. Also, Dan (in the tracker) states
that the bug is in the kernel client only, but I can also see it in the
fuse client.
Anyway, this patch fixes a real bug.
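
To make the symptom concrete, here's a minimal userspace sketch (not part
of the patch; the '/mnt/cephfs/subdir' path is just a placeholder for
wherever the quota-restricted subdirectory is mounted). Without this
patch, running it against a subdir that only has 'max_files' set prints
the whole filesystem size instead of the parent realm's 'max_bytes':

  /* check_statfs.c -- print the total size statfs reports for a mount */
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/statvfs.h>

  int main(int argc, char **argv)
  {
          struct statvfs st;
          const char *path = argc > 1 ? argv[1] : "/mnt/cephfs/subdir";

          if (statvfs(path, &st) != 0) {
                  perror("statvfs");
                  return EXIT_FAILURE;
          }
          /* f_blocks is in units of f_frsize */
          printf("%s: total bytes = %llu\n", path,
                 (unsigned long long)st.f_blocks * st.f_frsize);
          return EXIT_SUCCESS;
  }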
fs/ceph/quota.c | 35 +++++++++++++++++++++++++++--------
1 file changed, 27 insertions(+), 8 deletions(-)
diff --git a/fs/ceph/quota.c b/fs/ceph/quota.c
index a338a3ec0dc4..235a8d06a8ee 100644
--- a/fs/ceph/quota.c
+++ b/fs/ceph/quota.c
@@ -193,11 +193,17 @@ void ceph_cleanup_quotarealms_inodes(struct ceph_mds_client *mdsc)
mutex_unlock(&mdsc->quotarealms_inodes_mutex);
}
+enum quota_get_realm {
+ QUOTA_GET_MAX_FILES,
+ QUOTA_GET_MAX_BYTES,
+ QUOTA_GET_ANY
+};
+
/*
* This function walks through the snaprealm for an inode and returns the
- * ceph_snap_realm for the first snaprealm that has quotas set (either max_files
- * or max_bytes). If the root is reached, return the root ceph_snap_realm
- * instead.
+ * ceph_snap_realm for the first snaprealm that has quotas set (max_files,
+ * max_bytes, or any, depending on the 'which_quota' argument). If the root is
+ * reached, return the root ceph_snap_realm instead.
*
* Note that the caller is responsible for calling ceph_put_snap_realm() on the
* returned realm.
@@ -209,7 +215,9 @@ void ceph_cleanup_quotarealms_inodes(struct ceph_mds_client *mdsc)
* will be restarted.
*/
static struct ceph_snap_realm *get_quota_realm(struct ceph_mds_client *mdsc,
- struct inode *inode, bool retry)
+ struct inode *inode,
+ enum quota_get_realm which_quota,
+ bool retry)
{
struct ceph_inode_info *ci = NULL;
struct ceph_snap_realm *realm, *next;
@@ -248,7 +256,17 @@ static struct ceph_snap_realm *get_quota_realm(struct ceph_mds_client *mdsc,
}
ci = ceph_inode(in);
- has_quota = __ceph_has_any_quota(ci);
+ switch (which_quota) {
+ case QUOTA_GET_MAX_BYTES:
+ has_quota = ci->i_max_bytes;
+ break;
+ case QUOTA_GET_MAX_FILES:
+ has_quota = ci->i_max_files;
+ break;
+ default: /* QUOTA_GET_ANY */
+ has_quota = __ceph_has_any_quota(ci);
+ break;
+ }
iput(in);
next = realm->parent;
@@ -279,8 +297,8 @@ bool ceph_quota_is_same_realm(struct inode *old, struct inode *new)
* dropped and we can then restart the whole operation.
*/
down_read(&mdsc->snap_rwsem);
- old_realm = get_quota_realm(mdsc, old, true);
- new_realm = get_quota_realm(mdsc, new, false);
+ old_realm = get_quota_realm(mdsc, old, QUOTA_GET_ANY, true);
+ new_realm = get_quota_realm(mdsc, new, QUOTA_GET_ANY, false);
if (PTR_ERR(new_realm) == -EAGAIN) {
up_read(&mdsc->snap_rwsem);
if (old_realm)
@@ -483,7 +501,8 @@ bool ceph_quota_update_statfs(struct ceph_fs_client *fsc, struct kstatfs *buf)
bool is_updated = false;
down_read(&mdsc->snap_rwsem);
- realm = get_quota_realm(mdsc, d_inode(fsc->sb->s_root), true);
+ realm = get_quota_realm(mdsc, d_inode(fsc->sb->s_root),
+ QUOTA_GET_MAX_BYTES, true);
up_read(&mdsc->snap_rwsem);
if (!realm)
return false;
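
For reviewers, the core of the change is just making the quota check
selective. The following standalone sketch (simplified types; the
'quota_info' struct is hypothetical, as the kernel code reads these
counters from struct ceph_inode_info) shows the selection logic the new
'which_quota' argument implements:

  #include <stdbool.h>
  #include <stdint.h>

  enum quota_get_realm {
          QUOTA_GET_MAX_FILES,
          QUOTA_GET_MAX_BYTES,
          QUOTA_GET_ANY
  };

  struct quota_info {
          uint64_t max_bytes;
          uint64_t max_files;
  };

  /* Return true only if the quota requested by 'which' is set */
  static bool quota_is_set(const struct quota_info *q,
                           enum quota_get_realm which)
  {
          switch (which) {
          case QUOTA_GET_MAX_BYTES:
                  return q->max_bytes != 0;
          case QUOTA_GET_MAX_FILES:
                  return q->max_files != 0;
          default: /* QUOTA_GET_ANY */
                  return q->max_bytes != 0 || q->max_files != 0;
          }
  }

With this in place, ceph_quota_update_statfs() asks for
QUOTA_GET_MAX_BYTES only, so a realm that merely sets 'max_files' no
longer satisfies the lookup and the walk continues up the snaprealm
hierarchy to a realm that actually has 'max_bytes'.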