Message-ID: <YjudB7XARLlRtBiR@mit.edu>
Date: Wed, 23 Mar 2022 18:19:51 -0400
From: "Theodore Ts'o" <tytso@....edu>
To: Miklos Szeredi <miklos@...redi.hu>
Cc: Christian Brauner <brauner@...nel.org>,
Miklos Szeredi <mszeredi@...hat.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Linux API <linux-api@...r.kernel.org>,
linux-man <linux-man@...r.kernel.org>,
LSM <linux-security-module@...r.kernel.org>,
Karel Zak <kzak@...hat.com>, Ian Kent <raven@...maw.net>,
David Howells <dhowells@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>,
Christian Brauner <christian@...uner.io>,
Amir Goldstein <amir73il@...il.com>,
James Bottomley <James.Bottomley@...senpartnership.com>
Subject: Re: [RFC PATCH] getvalues(2) prototype
On Wed, Mar 23, 2022 at 02:24:40PM +0100, Miklos Szeredi wrote:
> The reason I started thinking about this is that Amir wanted a per-sb
> iostat interface and dumped it into /proc/PID/mountstats. And that is
> definitely not the right way to go about this.
>
> So we could add a statfsx() and start filling in new stuff, and that's
> what Linus suggested. But then we might need to add stuff that is not
> representable in a flat structure (like for example the stuff that
> nfs_show_stats does) and that again needs new infrastructure.
>
> Another example is task info in /proc. Utilities are doing a crazy
> number of syscalls to get trivial information. Why don't we have a
> procx(2) syscall? I guess because lots of that is difficult to
> represent in a flat structure. Just take the lsof example: it's doing
> hundreds of thousands of syscalls on a desktop computer with just a
> few hundred processes.
I'm still a bit puzzled about the reason for getvalues(2) beyond,
"reduce the number of system calls". Is this a performance argument?
If so, have you benchmarked lsof using this new interface?
I did a quickie run on my laptop, which currently had 444 processes.
"lsof /home/tytso > /tmp/foo" didn't take long:
% time lsof /home/tytso >& /tmp/foo
real 0m0.144s
user 0m0.039s
sys 0m0.087s
And an strace of that same lsof command indicated it had 67,889 lines.
So yeah, lots of system calls. But is this new system call really
going to speed up things by all that much?
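(If someone wants to quantify this, strace's -c option will summarize
the count of each syscall and the time spent in it, which should show
whether the syscall overhead is actually dominating the runtime:

% strace -c -f lsof /home/tytso > /dev/null

That would be a more convincing baseline for any before-and-after
comparison with getvalues(2).)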
If the argument is "make it easier to use", what's wrong with the
solution of creating userspace libraries which abstract away calls to
open/read/close a whole bunch of procfs files to make life easier for
of creating userspace libraries which abstract away calls to
open/read/close a whole bunch of procfs files to make life easier for
application programmers?
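To sketch what I mean (this is a hypothetical helper, not an existing
API), such a library could collapse the open/read/close dance into a
single call:

#include <fcntl.h>
#include <unistd.h>

/* Read up to len bytes of a procfs file into buf; returns the
 * number of bytes read, or -1 on error. */
static ssize_t proc_read(const char *path, char *buf, size_t len)
{
	int fd = open(path, O_RDONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = read(fd, buf, len);
	close(fd);
	return n;
}

Higher-level wrappers could then parse specific files into structures,
and all of that can be iterated on in userspace without committing the
kernel to a new ABI.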
In short, what problem is this new system call going to solve? Each
new system call, especially with all of the parsing that this one is
going to use, is going to be an additional attack surface, and an
additional new system call that we have to maintain --- and for the
first 7-10 years, userspace programs are going to have to use the
existing open/read/close interface since enterprise kernels stick
around for a L-O-N-G time, so any kind of ease-of-use argument isn't
really going to help application programs until RHEL 10 becomes
obsolete. (Unless you plan to backport this into RHEL 9 beta, but
still, waiting for RHEL 9 to become completely EOL is going to be... a
while.) So whatever the benefits of this new interface are going to
be, I suggest we should be sure that it's really worth it.
Cheers,
- Ted