Message-ID: <YmkCFeymZjoCZjQH@suse.de>
Date:   Wed, 27 Apr 2022 09:43:01 +0100
From:   Luís Henriques <lhenriques@...e.de>
To:     Xiubo Li <xiubli@...hat.com>
Cc:     Jeff Layton <jlayton@...nel.org>,
        Ilya Dryomov <idryomov@...il.com>,
        Ceph Development <ceph-devel@...r.kernel.org>,
        linux-kernel@...r.kernel.org, rptaylor@...c.ca
Subject: Re: [PATCH] ceph: fix statfs for subdir mounts

On Wed, Apr 27, 2022 at 01:54:08PM +0800, Xiubo Li wrote:
> 
> On 4/27/22 12:12 AM, Luís Henriques wrote:
> > When a mount uses as its base a directory that has a 'max_bytes' quota,
> > statfs uses that value as the total; if a subdirectory is used instead,
> > statfs uses the same 'max_bytes' value, unless another quota is set.
> > 
> > Unfortunately, if this subdirectory only has the 'max_files' quota set,
> > then statfs uses the filesystem total.  Fix this by making sure we only
> > lookup realms that contain the 'max_bytes' quota.
> > 
> > Link: https://tracker.ceph.com/issues/55090
> > Cc: Ryan Taylor <rptaylor@...c.ca>
> > Signed-off-by: Luís Henriques <lhenriques@...e.de>
> > ---
> > Hi!
> > 
> > Unfortunately, I don't think this is the real fix for the bug reported in
> > the tracker (or by Ryan on the mailing-list).  I haven't seen any
> > reference to 'max_files' so I suspect Ryan and Dan are hitting another
> > bug.  This can be easily checked by 'getfattr -n ceph.quota <subdir>'.
> > 
> > Also, Dan (in the tracker) states the bug is on the kernel client only but
> > I can also see it in the fuse client.
> > 
> > Anyway, this patch fixes a real bug.
> > 
> >   fs/ceph/quota.c | 35 +++++++++++++++++++++++++++--------
> >   1 file changed, 27 insertions(+), 8 deletions(-)
> > 
> > diff --git a/fs/ceph/quota.c b/fs/ceph/quota.c
> > index a338a3ec0dc4..235a8d06a8ee 100644
> > --- a/fs/ceph/quota.c
> > +++ b/fs/ceph/quota.c
> > @@ -193,11 +193,17 @@ void ceph_cleanup_quotarealms_inodes(struct ceph_mds_client *mdsc)
> >   	mutex_unlock(&mdsc->quotarealms_inodes_mutex);
> >   }
> > +enum quota_get_realm {
> > +	QUOTA_GET_MAX_FILES,
> > +	QUOTA_GET_MAX_BYTES,
> > +	QUOTA_GET_ANY
> > +};
> > +
> >   /*
> >    * This function walks through the snaprealm for an inode and returns the
> > - * ceph_snap_realm for the first snaprealm that has quotas set (either max_files
> > - * or max_bytes).  If the root is reached, return the root ceph_snap_realm
> > - * instead.
> > + * ceph_snap_realm for the first snaprealm that has quotas set (max_files,
> > + * max_bytes, or any, depending on the 'which_quota' argument).  If the root is
> > + * reached, return the root ceph_snap_realm instead.
> >    *
> >    * Note that the caller is responsible for calling ceph_put_snap_realm() on the
> >    * returned realm.
> > @@ -209,7 +215,9 @@ void ceph_cleanup_quotarealms_inodes(struct ceph_mds_client *mdsc)
> >    * will be restarted.
> >    */
> >   static struct ceph_snap_realm *get_quota_realm(struct ceph_mds_client *mdsc,
> > -					       struct inode *inode, bool retry)
> > +					       struct inode *inode,
> > +					       enum quota_get_realm which_quota,
> > +					       bool retry)
> >   {
> >   	struct ceph_inode_info *ci = NULL;
> >   	struct ceph_snap_realm *realm, *next;
> > @@ -248,7 +256,17 @@ static struct ceph_snap_realm *get_quota_realm(struct ceph_mds_client *mdsc,
> >   		}
> >   		ci = ceph_inode(in);
> > -		has_quota = __ceph_has_any_quota(ci);
> > +		switch (which_quota) {
> > +		case QUOTA_GET_MAX_BYTES:
> > +			has_quota = ci->i_max_bytes;
> > +			break;
> > +		case QUOTA_GET_MAX_FILES:
> > +			has_quota = ci->i_max_files;
> > +			break;
> > +		default: /* QUOTA_GET_ANY */
> > +			has_quota = __ceph_has_any_quota(ci);
> 
> How about moving this logic to __ceph_has_any_quota() by renaming it to
> __ceph_has_quota(ci, enum quota_get_realm which) ?
> 
> + static inline bool __ceph_has_quota(struct ceph_inode_info *ci,
> +				     enum quota_get_realm which)
> + {
> +		switch (which) {
> +		case QUOTA_GET_MAX_BYTES:
> +			return !!ci->i_max_bytes;
> +		case QUOTA_GET_MAX_FILES:
> +			return !!ci->i_max_files;
> +		default: /* QUOTA_GET_ANY */
> +			return !!(ci->i_max_files || ci->i_max_bytes);
> +		}
> + }
> 
> Otherwise LGTM.

Thank you for the review, Xiubo.  Your suggestion makes sense, I'll send
out v2 shortly.

Cheers,
--
Luís


> 
> -- Xiubo
> 
> 
> > +			break;
> > +		}
> >   		iput(in);
> >   		next = realm->parent;
> > @@ -279,8 +297,8 @@ bool ceph_quota_is_same_realm(struct inode *old, struct inode *new)
> >   	 * dropped and we can then restart the whole operation.
> >   	 */
> >   	down_read(&mdsc->snap_rwsem);
> > -	old_realm = get_quota_realm(mdsc, old, true);
> > -	new_realm = get_quota_realm(mdsc, new, false);
> > +	old_realm = get_quota_realm(mdsc, old, QUOTA_GET_ANY, true);
> > +	new_realm = get_quota_realm(mdsc, new, QUOTA_GET_ANY, false);
> >   	if (PTR_ERR(new_realm) == -EAGAIN) {
> >   		up_read(&mdsc->snap_rwsem);
> >   		if (old_realm)
> > @@ -483,7 +501,8 @@ bool ceph_quota_update_statfs(struct ceph_fs_client *fsc, struct kstatfs *buf)
> >   	bool is_updated = false;
> >   	down_read(&mdsc->snap_rwsem);
> > -	realm = get_quota_realm(mdsc, d_inode(fsc->sb->s_root), true);
> > +	realm = get_quota_realm(mdsc, d_inode(fsc->sb->s_root),
> > +				QUOTA_GET_MAX_BYTES, true);
> >   	up_read(&mdsc->snap_rwsem);
> >   	if (!realm)
> >   		return false;
> > 
> 
> 
