Date:   Thu, 26 May 2022 20:23:39 -0700
From:   Gregory Farnum <gfarnum@...hat.com>
To:     Xiubo Li <xiubli@...hat.com>
Cc:     Jeff Layton <jlayton@...nel.org>,
        Luís Henriques <lhenriques@...e.de>,
        Ilya Dryomov <idryomov@...il.com>,
        ceph-devel <ceph-devel@...r.kernel.org>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v2] ceph: prevent a client from exceeding the MDS
 maximum xattr size

On Thu, May 26, 2022 at 6:10 PM Xiubo Li <xiubli@...hat.com> wrote:
>
>
> On 5/27/22 8:44 AM, Jeff Layton wrote:
> > On Fri, 2022-05-27 at 08:36 +0800, Xiubo Li wrote:
> >> On 5/27/22 2:39 AM, Jeff Layton wrote:
> >>> A question:
> >>>
> >>> How do the MDS's discover this setting? Do they get it from the mons? If
> >>> so, I wonder if there is a way for the clients to query the mon for this
> >>> instead of having to extend the MDS protocol?
> >> That sounds like what "max_file_size" does; it is recorded in the
> >> 'mdsmap'.
> >>
> >> Currently, though, "max_xattr_pairs_size" is a per-daemon MDS option,
> >> and each MDS daemon could be set to a different value.
> >>
> >>
> > Right, but the MDS's in general don't use local config files. Where are
> > these settings stored? Could the client (potentially) query for them?
>
> AFAIK, each process in ceph has its own copy of the "CephContext". I
> don't know how to query all of them, but there are APIs such as
> "rados_conf_set/get" that can do similar things.
>
> I'm not sure whether that would work in our case.
>
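
(Purely as an illustration of that limitation: a librados caller can read
a config option, but it only sees its own process's copy of the
CephContext, not the MDS daemons'. The option name below is just an
example.)

/*
 * Illustration only: reading a config option through librados.  This
 * reflects the calling process's own CephContext, so a per-daemon
 * override on an MDS would not be visible here.
 */
#include <stdio.h>
#include <rados/librados.h>

int main(void)
{
        rados_t cluster;
        char buf[64];

        if (rados_create(&cluster, NULL) < 0)
                return 1;
        rados_conf_read_file(cluster, NULL);    /* default ceph.conf search */

        if (rados_conf_get(cluster, "mds_max_xattr_pairs_size",
                           buf, sizeof(buf)) == 0)
                printf("local view of the option: %s\n", buf);

        rados_shutdown(cluster);
        return 0;
}
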
> >
> > I'm pretty sure the client does fetch and parse the mdsmap. If it's
> > there then it could grab the setting for all of the MDS's at mount time
> > and settle on the lowest one.
> >
> > I think a solution like that might be more resilient than having to
> > fiddle with feature bits and such...
>
> Yeah, IMO just making this option work like "max_file_size" is more
> appropriate.

Makes sense to me — this is really a property of the filesystem, not a
daemon, so it should be propagated through common filesystem state.
I guess Luis' https://github.com/ceph/ceph/pull/46357 should be
updated to do it that way? I see some discussion there about handling
old clients which don't recognize these limits as well.
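
For illustration only (this is not the code from the PR, and the new
field name plus the exact check site are assumptions on my part): the
client already decodes max_file_size out of the mdsmap into struct
ceph_mdsmap, so a filesystem-wide xattr limit could travel the same way
and be checked before the setxattr request is ever sent, roughly:

/* Sketch only.  m_max_file_size is already decoded from the mdsmap;
 * m_max_xattr_size is a hypothetical new field carried the same way. */
struct ceph_mdsmap {
        /* ... existing fields ... */
        u64 m_max_file_size;
        u64 m_max_xattr_size;           /* hypothetical */
};

/* Somewhere early in the client's setxattr path (location assumed): */
if (size > mdsc->mdsmap->m_max_xattr_size)
        return -ENOSPC;                 /* errno choice illustrative only */
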
-Greg
