Message-ID: <CAOi1vP85hF6qbch-mpY+NN5bS4p_ta=WG_b=cfKNOJLD9CrNag@mail.gmail.com>
Date: Wed, 27 Sep 2023 12:53:42 +0200
From: Ilya Dryomov <idryomov@...il.com>
To: Max Kellermann <max.kellermann@...os.com>
Cc: Xiubo Li <xiubli@...hat.com>, Jeff Layton <jlayton@...nel.org>,
ceph-devel@...r.kernel.org, linux-kernel@...r.kernel.org,
Venky Shankar <vshankar@...hat.com>,
Gregory Farnum <gfarnum@...hat.com>
Subject: Re: [PATCH 1/2] fs/ceph/debugfs: make all files world-readable
On Tue, Sep 26, 2023 at 11:09 AM Max Kellermann
<max.kellermann@...os.com> wrote:
>
> On Tue, Sep 26, 2023 at 10:46 AM Ilya Dryomov <idryomov@...il.com> wrote:
> > No, "ceph" command (as well as "rbd", "rados", etc) can be run from
> > anywhere -- it's just a matter of installing a package which is likely
> > already installed unless you are mounting CephFS manually without using
> > /sbin/mount.ceph mount helper.
>
> I have never heard of that helper, so no, we're not using it - should we?
If you have already figured out the right mount options, you might as
well not use it.  The helper does things like determining whether v1 or
v2 addresses should be used, fetching the key and passing it via the
kernel keyring (whereas you are probably passing it verbatim on the
command line), and so on.  It's the same syscall in the end, so the
helper is certainly not required.
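To illustrate the "same syscall" point, here is a minimal sketch of the
direct route (the monitor address, entity name and base64 secret are
made-up placeholders; this is the "secret passed verbatim" variant that
the helper would normally replace with a kernel-keyring-backed key):

/*
 * Minimal sketch: mounting CephFS directly with mount(2), no helper.
 * The monitor address, entity name and secret below are placeholders.
 */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	const char *src  = "192.168.0.1:6789:/";        /* mon address + path   */
	const char *opts = "name=admin,secret=AQB...";  /* base64 secret inline */

	if (mount(src, "/mnt/cephfs", "ceph", 0, opts) != 0) {
		perror("mount");
		return 1;
	}
	return 0;
}

mount.ceph ends up issuing essentially the same call, just with the
secret stashed in the kernel keyring instead of embedded in the option
string.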
>
> This "ceph" tool requires installing 90 MB of additional Debian
> packages, which I just tried on a test cluster, and "ceph fs top"
> fails with "Error initializing cluster client: ObjectNotFound('RADOS
> object not found (error calling conf_read_file)')". Okay, so I have to
> configure something... but... I don't get why I would want to do
> that, when I can get the same information from the kernel without
> installing or configuring anything. This sounds like overcomplicating
> things for no reason.
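(As an aside, the conf_read_file error just means the "ceph" CLI could
not find a cluster config; by default it looks for /etc/ceph/ceph.conf.
A minimal sketch of one, with placeholder monitor addresses and keyring
path, would be something like:

[global]
	# placeholder monitor addresses, substitute your own
	mon_host = 192.168.0.1:6789, 192.168.0.2:6789
	# keyring with the credentials of the client running the command
	keyring = /etc/ceph/ceph.client.admin.keyring

plus the corresponding keyring file.)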
I have relayed my understanding of this feature (or rather how it was
presented to me). I see where you are coming from, so I'm adding more
CephFS folks to chime in.
Thanks,
Ilya