Message-ID: <CAJfpegvYGB01i9eqCH-95Ynqy0P=CuxPCSAbSpBPa-TV8iXN0Q@mail.gmail.com>
Date: Tue, 7 Apr 2020 15:59:10 +0200
From: Miklos Szeredi <miklos@...redi.hu>
To: Ian Kent <raven@...maw.net>
Cc: Lennart Poettering <mzxreary@...inter.de>,
David Howells <dhowells@...hat.com>,
Christian Brauner <christian.brauner@...ntu.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>, dray@...hat.com,
Karel Zak <kzak@...hat.com>,
Miklos Szeredi <mszeredi@...hat.com>,
Steven Whitehouse <swhiteho@...hat.com>,
Jeff Layton <jlayton@...hat.com>, andres@...razel.de,
keyrings@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, Aleksa Sarai <cyphar@...har.com>
Subject: Re: Upcoming: Notifications, FS notifications and fsinfo()
On Tue, Apr 7, 2020 at 4:22 AM Ian Kent <raven@...maw.net> wrote:
> > Right now, when you have n mounts, and any mount changes, or one is
> > added or removed then we have to parse the whole mount table again,
> > asynchronously, processing all n entries again, every frickin
> > time. This means the work to process n mounts popping up at boot is
> > O(n²). That sucks, it should be obvious to anyone. Now if we get that
> > fixed, by some mount API that can send us minimal notifications about
> > what happened and where, then this becomes O(n), which is totally OK.
Something's not right with the above statement. Hint: if there are
lots of events in quick succession, you can batch them quite easily to
prevent overloading the system.
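Just to make that batching point concrete, here is a minimal sketch of the
poll loop (not the attached watch_mounts.c, just an illustration of the
coalescing behaviour): block in poll() on /proc/self/mountinfo and re-read
the whole file once per wakeup, so any number of changes that happened in
between are picked up by a single pass.

/*
 * Minimal sketch of the coalescing behaviour (not the attached
 * watch_mounts.c): poll /proc/self/mountinfo for POLLPRI and re-read the
 * whole file once per wakeup.  However many mounts happened while the
 * previous snapshot was being processed, the next single re-read picks
 * them all up.
 */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	static char buf[8 << 20];	/* assume the table fits; a real tool would grow this */
	int fd = open("/proc/self/mountinfo", O_RDONLY);

	if (fd == -1) {
		perror("open");
		return 1;
	}

	for (;;) {
		size_t off = 0;
		ssize_t n;

		/* One pass over the file captures every change so far. */
		if (lseek(fd, 0, SEEK_SET) == -1)
			break;
		while ((n = read(fd, buf + off, sizeof(buf) - 1 - off)) > 0)
			off += n;
		buf[off] = '\0';

		/* ... split into lines and diff against the previous snapshot ... */

		/* Block until the mount table changes again. */
		struct pollfd pfd = { .fd = fd, .events = POLLPRI };
		if (poll(&pfd, 1, -1) == -1)
			break;
	}
	close(fd);
	return 0;
}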
Wrote a pair of utilities to check out the capabilities of the current
API. The first one just creates N mounts, optionally sleeping between
each. The second one watches /proc/self/mountinfo, waking up on
POLLPRI and generating individual (add/del/change) events by comparing
the new contents with the previous snapshot.
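The event generation can be sketched roughly like this (a simplified
stand-in for the attached watch_mounts.c, keying on the mount ID in the
first field of each mountinfo line; the real thing may well differ):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* mountinfo lines start with the mount ID, so atoi() on the line gets it. */
static int mnt_id(const char *line)
{
	return atoi(line);
}

static int cmp_line(const void *a, const void *b)
{
	return mnt_id(*(const char *const *)a) - mnt_id(*(const char *const *)b);
}

/* Print add/del/change events for the difference between two snapshots. */
static void diff_snapshots(char **old, size_t nold, char **new, size_t nnew)
{
	size_t i = 0, j = 0;

	qsort(old, nold, sizeof(*old), cmp_line);
	qsort(new, nnew, sizeof(*new), cmp_line);

	while (i < nold || j < nnew) {
		if (j == nnew || (i < nold && mnt_id(old[i]) < mnt_id(new[j]))) {
			printf("del %d\n", mnt_id(old[i]));
			i++;
		} else if (i == nold || mnt_id(new[j]) < mnt_id(old[i])) {
			printf("add %d\n", mnt_id(new[j]));
			j++;
		} else {
			if (strcmp(old[i], new[j]) != 0)
				printf("change %d\n", mnt_id(old[i]));
			i++;
			j++;
		}
	}
}

int main(void)
{
	/* Made-up example lines, just to exercise the diff. */
	char *old[] = { "21 1 8:1 / / rw - ext4 /dev/sda1 rw" };
	char *new[] = { "21 1 8:1 / / ro - ext4 /dev/sda1 ro",
			"42 21 0:35 / /mnt/x rw - tmpfs tmpfs rw" };

	diff_snapshots(old, 1, new, 2);	/* prints "change 21" then "add 42" */
	return 0;
}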
First use case: create 10,000 mounts, then start the watcher and
create 1,000 more mounts with a 50ms sleep between them. Total time
(user + system) consumed by the watcher: 25s. This is indeed pretty
dismal, and a per-mount query will help tremendously. But it's still
"just" 25ms per mount, so if the mounts are far apart (which is what
this test is about), this won't thrash the system. Note how this is
self-regulating: if the load is high, it will automatically batch more
requests, preventing overload. It is also prone to losing add + remove
pairs in that case (and so is the ring-buffer based one from David).
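For reference, the mount-creating side of such a test can be as simple
as a loop around mount(2). A rough sketch, assuming stacked tmpfs
mounts and a made-up command line (the attached many-mounts.c may look
different):

/*
 * Rough sketch of a "create N mounts" helper (not necessarily the attached
 * many-mounts.c): mount a tmpfs N times on the same directory, optionally
 * sleeping between mounts.  Needs CAP_SYS_ADMIN.
 * Hypothetical usage: ./many-mounts <dir> <count> [sleep-ms]
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mount.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc < 3) {
		fprintf(stderr, "usage: %s <dir> <count> [sleep-ms]\n", argv[0]);
		return 1;
	}

	const char *dir = argv[1];
	long count = atol(argv[2]);
	long sleep_ms = argc > 3 ? atol(argv[3]) : 0;

	for (long i = 0; i < count; i++) {
		/* Stacking tmpfs mounts on one directory keeps the test simple. */
		if (mount("none", dir, "tmpfs", 0, NULL) == -1) {
			perror("mount");
			return 1;
		}
		if (sleep_ms)
			usleep(sleep_ms * 1000);
	}
	return 0;
}

Run once with a count of 10,000 and no sleep to set up the baseline,
then again with 1,000 and 50ms while the watcher is running to get the
first use case above.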
Second use case: start the watcher and create 50,000 mounts with no
sleep between them. Total time consumed by the watcher: 0.154s, or
3.08us/event. Note that the same test case adds about 5ms for the
50,000 umount events, which is 0.1us/event.
Real life will probably fall between these extremes, but it's clear
that there's room for improvement in userspace as well as in the
kernel interfaces. The current kernel interface is very efficient at
retrieving a lot of state in one go; it is not efficient at handling
small differences.
> > Anyway, I have the suspicion this discussion has stopped being
> > useful. I think you are trying to fix problems that userspace
> > actually doesn't have. I can just tell you what we understand the
> > problems are, but if you are out trying to fix other perceived
> > ones, then great, but I mostly lost interest.
I was, and still am, trying to see the big picture.
Whatever. I think it's your turn to show some numbers about how the
new API improves performance of systemd with a large number of mounts.
Thanks,
Miklos
Attachment: "many-mounts.c" (text/x-csrc, 1155 bytes)
Attachment: "watch_mounts.c" (text/x-csrc, 3380 bytes)