Date:   Tue, 26 Sep 2023 10:45:49 +0200
From:   Ilya Dryomov <idryomov@...il.com>
To:     Max Kellermann <max.kellermann@...os.com>
Cc:     Xiubo Li <xiubli@...hat.com>, Jeff Layton <jlayton@...nel.org>,
        ceph-devel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] fs/ceph/debugfs: make all files world-readable

On Tue, Sep 26, 2023 at 8:16 AM Max Kellermann <max.kellermann@...os.com> wrote:
>
> On Mon, Sep 25, 2023 at 12:42 PM Ilya Dryomov <idryomov@...il.com> wrote:
> > A word of caution about building metrics collectors based on debugfs
> > output: there are no stability guarantees.  While the format won't be
> > changed just for the sake of change, of course, expect zero effort to
> > preserve backwards compatibility.
>
> Agreed, but there's nothing else. We have been using my patch for
> quite some time, and it has been very useful.
>
> Maybe we can discuss promoting these statistics to sysfs/proc? (the
> raw numbers, not the existing aggregates, which are useless for any
> practical purpose)
>
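FWIW, a collector scraping the current debugfs files doesn't need much
code.  Here is a minimal sketch in Python, assuming the metrics/latency
table layout of recent kernels (a header line, a dashed separator, then
one whitespace-separated row per op type) -- which, again, can change
at any time:

    #!/usr/bin/env python3
    # Minimal sketch: scrape per-client cephfs latency metrics from
    # debugfs.  The path and column layout are assumptions based on
    # recent kernels and carry no stability guarantee.
    import glob

    # Reading debugfs normally requires root -- hence the interest in
    # making these files world-readable.
    for path in glob.glob("/sys/kernel/debug/ceph/*/metrics/latency"):
        client = path.split("/")[5]           # <fsid>.client<id>
        with open(path) as f:
            rows = f.read().splitlines()[2:]  # skip header + separator
        for row in rows:
            fields = row.split()
            if len(fields) >= 3:
                item, total, avg_us = fields[:3]
                print(f"{client} {item}: total={total} avg_lat_us={avg_us}")
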
> > The latency metrics in particular are sent to the MDS in binary form
> > and are intended to be consumed through commands like "ceph fs top".
> > debugfs stuff is there just for an occasional sneak peek (apart from
> > actual debugging).
>
> I don't know the whole Ceph ecosystem very well, but "ceph" is a
> command that is supposed to run on a Ceph server, not on a machine
> that mounts a cephfs, right? If that's right, then this command is
> useless for me.

No, "ceph" command (as well as "rbd", "rados", etc) can be run from
anywhere -- it's just a matter of installing a package which is likely
already installed unless you are mounting CephFS manually without using
/sbin/mount.ceph mount helper.
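
For example, on a client node (Debian-family package name shown; other
distros ship an equivalent, and the CLI needs a ceph.conf and keyring
that can reach the cluster):

    # apt install ceph-common    # provides /usr/bin/ceph and mount.ceph
    # ceph fs top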

Thanks,

                Ilya
