Date:   Thu, 02 Apr 2020 09:38:20 +0800
From:   Ian Kent <raven@...maw.net>
To:     Miklos Szeredi <miklos@...redi.hu>,
        David Howells <dhowells@...hat.com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Al Viro <viro@...iv.linux.org.uk>,
        Linux NFS list <linux-nfs@...r.kernel.org>,
        Andreas Dilger <adilger.kernel@...ger.ca>,
        Anna Schumaker <anna.schumaker@...app.com>,
        Theodore Ts'o <tytso@....edu>,
        Linux API <linux-api@...r.kernel.org>,
        linux-ext4@...r.kernel.org,
        Trond Myklebust <trond.myklebust@...merspace.com>,
        Miklos Szeredi <mszeredi@...hat.com>,
        Christian Brauner <christian@...uner.io>,
        Jann Horn <jannh@...gle.com>,
        "Darrick J. Wong" <darrick.wong@...cle.com>,
        Karel Zak <kzak@...hat.com>, Jeff Layton <jlayton@...hat.com>,
        linux-fsdevel@...r.kernel.org,
        LSM <linux-security-module@...r.kernel.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 00/13] VFS: Filesystem information [ver #19]

On Wed, 2020-04-01 at 10:37 +0200, Miklos Szeredi wrote:
> On Wed, Apr 1, 2020 at 10:27 AM David Howells <dhowells@...hat.com>
> wrote:
> > Miklos Szeredi <miklos@...redi.hu> wrote:
> > 
> > > According to dhowells' measurements, processing 100k mounts would
> > > take about a few seconds of system time (that's the time spent by
> > > the kernel to retrieve the data,
> > 
> > But the inefficiency of mountfs - at least as currently implemented
> > - scales up with the number of individual values you want to
> > retrieve, both in terms of memory usage and time taken.
> 
> I've taken that into account when guesstimating a "few seconds per
> 100k entries".  My guess is that there's probably an order of
> magnitude difference between the performance of a fs based interface
> and a binary syscall based interface.  That could be reduced somewhat
> with a readfile(2) type API.
> 
> But the point is: this does not matter.  Whether it's .5s or 5s is
> completely irrelevant, as neither is going to take down the system,
> and userspace processing is probably going to take as much, if not
> more time.  And remember, we are talking about stopping and starting
> the automount daemon, which is something that happens, but it should
> not happen often by any measure.

Yes, but don't forget, I'm reporting what I saw when testing during
development.

From previous discussion we know systemd (and probably the other apps
like udisks2, et al.) gets notified on mount and umount activity, so
it's not just the starting and stopping of autofs that's a problem
with very large mount tables.
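
To make the per-value cost concrete: with a file-per-attribute
interface, every value a watcher re-reads is an open+read+close round
trip, so rescanning a table of M mounts with N attributes each is on
the order of M*N round trips. Something like the readfile(2) idea
mentioned above collapses that to one call per attribute but doesn't
change the scaling. A rough userspace illustration (readfile() here is
just a hypothetical wrapper, and the attribute paths are made up):

/* Illustration only: "readfile" is a made-up helper approximating the
 * proposed readfile(2) by wrapping open/read/close in one call site. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static ssize_t readfile(const char *path, char *buf, size_t len)
{
        int fd = open(path, O_RDONLY);
        ssize_t n;

        if (fd < 0)
                return -1;
        n = read(fd, buf, len);
        close(fd);
        return n;
}

int main(void)
{
        /* Hypothetical one-value-per-file attributes for one mount;
         * the path layout is invented for the example. */
        static const char *attrs[] = { "source", "fstype", "options" };
        char buf[256];
        int i;

        for (i = 0; i < 3; i++) {
                char path[64];

                snprintf(path, sizeof(path), "/mnt-attrs/21/%s", attrs[i]);
                if (readfile(path, buf, sizeof(buf) - 1) < 0)
                        perror(attrs[i]);
        }
        /* Repeat that loop for every mount in a 100k-entry table and
         * the round trips multiply accordingly. */
        return 0;
}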

To get a feel for the real difference we'd need to make the libmount
changes for both interfaces and then compare their behaviour. The
mount and umount lookup case that Karel (and I) talked about should be
sufficient.
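
For reference, the lookup I mean is roughly what mount(8)/umount(8)
end up doing through libmount today: parse the whole mount table into
memory and then search it for a single target. A minimal sketch, error
handling trimmed (link with -lmount):

#include <stdio.h>
#include <libmount.h>   /* util-linux libmount */

/* Find the filesystem mounted on "target".  Today this parses the
 * whole mount table (normally /proc/self/mountinfo) even though we
 * only want one entry. */
static int lookup_target(const char *target)
{
        struct libmnt_table *tb = mnt_new_table();
        struct libmnt_fs *fs;
        int rc = -1;

        if (!tb)
                return -1;
        if (mnt_table_parse_mtab(tb, NULL) == 0) {
                fs = mnt_table_find_target(tb, target, MNT_ITER_BACKWARD);
                if (fs) {
                        printf("%s is %s on %s\n", target,
                               mnt_fs_get_fstype(fs), mnt_fs_get_source(fs));
                        rc = 0;
                }
        }
        mnt_unref_table(tb);
        return rc;
}

The comparison that matters is how that whole-table parse behaves
against a targeted per-mount query once libmount can use either
backend.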

The biggest problem I had with fsinfo() when I was working with the
earlier series was getting fs-specific options, in particular the need
to use the superblock ->fsinfo() op. With this latest series David has
made that part of the generic code, and your patch also covers it.

So the thing that was holding me up is done, and we should be getting
on with the libmount improvements; we need to settle this.

I prefer the system call interface, and I'm not offering any
justification for that other than a general dislike of (and, on
occasion, outright frustration with) pretty much every proc
implementation I have had to look at.

> 
> > With fsinfo(), I've tried to batch values together where it makes
> > sense - and there's no lingering memory overhead - no extra inodes,
> > dentries and files required.
> 
> The dentries, inodes and files in your test are single use (except
> the root dentry) and can be made ephemeral if that turns out to be
> better.  My guess is that dentries belonging to individual attributes
> should be deleted on final put, while the dentries belonging to the
> mount directory can be reclaimed normally.
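
For what it's worth, the kernel already has ready-made helpers for
that kind of ephemeral dentry; something along these lines (just a
sketch, not taken from either patch series):

/* Sketch only: per-attribute dentries could reuse
 * simple_dentry_operations (.d_delete = always_delete_dentry, see
 * fs/libfs.c), so they are dropped on the final dput() instead of
 * being cached, while the mount directory dentries keep the default
 * behaviour and get reclaimed normally. */
#include <linux/dcache.h>
#include <linux/fs.h>

static void attr_make_dentry_ephemeral(struct dentry *dentry)
{
        /* Must be done before the dentry goes live (e.g. before d_add()). */
        d_set_d_op(dentry, &simple_dentry_operations);
}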
> 
> Thanks,
> Miklos
