Message-ID: <87leunwd5z.fsf@brahms.olymp>
Date:   Fri, 27 May 2022 10:14:00 +0100
From:   Luís Henriques <lhenriques@...e.de>
To:     Gregory Farnum <gfarnum@...hat.com>
Cc:     Xiubo Li <xiubli@...hat.com>, Jeff Layton <jlayton@...nel.org>,
        Ilya Dryomov <idryomov@...il.com>,
        ceph-devel <ceph-devel@...r.kernel.org>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v2] ceph: prevent a client from exceeding the MDS
 maximum xattr size

Gregory Farnum <gfarnum@...hat.com> writes:

> On Thu, May 26, 2022 at 6:10 PM Xiubo Li <xiubli@...hat.com> wrote:
>>
>>
>> On 5/27/22 8:44 AM, Jeff Layton wrote:
>> > On Fri, 2022-05-27 at 08:36 +0800, Xiubo Li wrote:
>> >> On 5/27/22 2:39 AM, Jeff Layton wrote:
>> >>> A question:
>> >>>
>> >>> How do the MDS's discover this setting? Do they get it from the mons? If
>> >>> so, I wonder if there is a way for the clients to query the mon for this
>> >>> instead of having to extend the MDS protocol?
>> >> It sounds like what "max_file_size" does, which is recorded in the
>> >> 'mdsmap'.
>> >>
>> >> Currently, though, "max_xattr_pairs_size" is a per-daemon MDS option,
>> >> so each MDS could be configured with a different value.
>> >>
>> >>
>> > Right, but the MDS's in general don't use local config files. Where are
>> > these settings stored? Could the client (potentially) query for them?
>>
>> AFAIK, each process in ceph will have its own copy of the
>> "CephContext". I don't know how to query all of them, but I know there
>> are APIs such as "rados_conf_set/get" that do similar things.
>>
>> Not sure whether that would work in our case.
>>
>> >
>> > I'm pretty sure the client does fetch and parse the mdsmap. If it's
>> > there then it could grab the setting for all of the MDS's at mount time
>> > and settle on the lowest one.
>> >
>> > I think a solution like that might be more resilient than having to
>> > fiddle with feature bits and such...
>>
>> Yeah, IMO just making this option behave like "max_file_size" is more
>> appropriate.
>
> Makes sense to me — this is really a property of the filesystem, not a
> daemon, so it should be propagated through common filesystem state.

Right now max_xattr_pairs_size seems to be something that can be set on
each MDS individually, so it is definitely not a filesystem property.  To
be honest, I think it's nasty to have this knob in the first place,
because it lets an admin set a value that would allow clients to blow up
the MDS cluster.
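
As for the "rados_conf_set/get" idea quoted above: as far as I understand,
those calls only touch the local handle's own CephContext, so they would
not tell us what each MDS daemon is actually running with.  A quick,
untested sketch (assuming the option is spelled "mds_max_xattr_pairs_size")
only ever reports the client's local view:

#include <stdio.h>
#include <rados/librados.h>

int main(void)
{
	rados_t cluster;
	char buf[64];

	if (rados_create(&cluster, NULL) < 0)
		return 1;
	rados_conf_read_file(cluster, NULL);	/* default ceph.conf lookup */

	/*
	 * This reads the option from this handle's CephContext only,
	 * not from any MDS daemon's runtime configuration.
	 */
	if (rados_conf_get(cluster, "mds_max_xattr_pairs_size",
			   buf, sizeof(buf)) == 0)
		printf("local view: %s\n", buf);

	rados_shutdown(cluster);
	return 0;
}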

> I guess Luis' https://github.com/ceph/ceph/pull/46357 should be
> updated to do it that way?

Just to confirm, by "to do it that way" you mean to move that setting into
the mdsmap, right?
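
If so, then once the limit is in the mdsmap the client could settle on the
most restrictive value across all active MDS ranks at mount time, as Jeff
suggested above.  Roughly like this (the struct and field names are made
up for illustration, not the real mdsmap layout):

#include <stddef.h>
#include <stdint.h>

struct mds_rank_info {			/* hypothetical, not the real mdsmap */
	uint64_t max_xattr_pairs_size;	/* 0 == rank did not advertise one */
};

static uint64_t effective_xattr_limit(const struct mds_rank_info *ranks,
				      size_t nranks)
{
	uint64_t limit = UINT64_MAX;
	size_t i;

	for (i = 0; i < nranks; i++)
		if (ranks[i].max_xattr_pairs_size &&
		    ranks[i].max_xattr_pairs_size < limit)
			limit = ranks[i].max_xattr_pairs_size;

	return limit;	/* UINT64_MAX means no limit was advertised */
}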

> I see some discussion there about handling
> old clients which don't recognize these limits as well.

Yeah, this is where the feature bit came from.  This would allow old
clients to be identified so that the MDS would not give them 'Xx'
capabilities.  Old clients would be able to set xattrs but not to buffer
them, i.e. they'd be forced to do the SETXATTR synchronously.
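
On the MDS side the idea is just to mask the exclusive xattr cap out of
whatever it would otherwise grant whenever the client has not advertised
the new feature bit.  A rough C sketch of that logic, with made-up names
and placeholder bit values (the real MDS code is C++ and the real cap
definitions live elsewhere):

#include <stdint.h>

/* Placeholder values for illustration only, not the real definitions. */
#define CAP_XATTR_EXCL			(1u << 9)	/* the 'Xx' cap */
#define FEATURE_NEW_MAX_XATTR_SIZE	(1ull << 18)	/* made-up bit  */

/*
 * If the client never told us it understands the xattr size limit,
 * never hand it the exclusive xattr cap: without 'Xx' it cannot buffer
 * xattr updates, so every SETXATTR goes to the MDS synchronously, where
 * the size check can be enforced.
 */
static uint32_t grantable_caps(uint32_t wanted, uint64_t client_features)
{
	if (!(client_features & FEATURE_NEW_MAX_XATTR_SIZE))
		wanted &= ~CAP_XATTR_EXCL;
	return wanted;
}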

Cheers,
-- 
Luís
