Message-ID: <fe35e57a-df04-3e77-a717-48483c01701a@themaw.net>
Date: Tue, 28 Nov 2017 07:43:05 +0800
From: Ian Kent <raven@...maw.net>
To: Mike Marion <mmarion@...lcomm.com>
Cc: NeilBrown <neilb@...e.com>, Al Viro <viro@...IV.linux.org.uk>,
Colin Walters <walters@...hat.com>,
Ondrej Holy <oholy@...hat.com>,
autofs mailing list <autofs@...r.kernel.org>,
Kernel Mailing List <linux-kernel@...r.kernel.org>,
David Howells <dhowells@...hat.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH 3/3] autofs - fix AT_NO_AUTOMOUNT not being honored
On 28/11/17 00:01, Mike Marion wrote:
> On Thu, Nov 23, 2017 at 08:36:49AM +0800, Ian Kent wrote:
>
>> And with the move of userspace to use /proc based mount tables (one
>> example being the symlink of /etc/mtab into /proc) even modest sized
>> direct mount maps will be a problem with every entry getting mounted.
>>
>> Systems will cope with this fine but larger systems not so much.
>
> Yes.. we've run into some big issues due to the change of /etc/mtab from
> a file to a symlink to /proc/self/mounts. Most have been worked around
> thus far (mostly due to Suse coming up with patches) but still have a
> few annoying ones.
>
I think the situation is going to get worse before it gets better.
On a recent Fedora and kernel, with a large map and heavy mount activity,
I see systemd, udisksd, gvfs-udisks2-volume-monitor, gvfsd-trash,
gnome-settings-daemon, packagekitd and gnome-shell all go crazy,
consuming large amounts of CPU.
Once the mount activity has completed I see two processes continue to
consume a large amount of CPU. I thought one of those two was systemd,
but my notes say they were gvfs-udisks2-volume-monitor and
gnome-settings-daemon.
The symlink change was probably the start; now a number of applications
go directly to the proc file system for this information.
For large mount tables and many processes accessing the mount table
(probably reading the whole thing, either periodically or on change
notification), the current system does not scale well at all.
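
For illustration, here is a minimal sketch of the poll-and-re-read
pattern such monitors typically use, based on the pollable
/proc/[pid]/mounts behaviour documented in proc(5). It isn't code from
any of the tools named above, just the general shape of the problem:
every mount or umount wakes every watcher, and each watcher re-reads
and re-parses the whole table.

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	struct pollfd pfd;
	char buf[4096];
	ssize_t n;
	int fd;

	fd = open("/proc/self/mounts", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	pfd.fd = fd;
	pfd.events = POLLPRI;	/* a mount table change is reported as POLLPRI */

	for (;;) {
		/* Re-read (and re-parse) the entire table from the start. */
		if (lseek(fd, 0, SEEK_SET) < 0) {
			perror("lseek");
			return 1;
		}
		while ((n = read(fd, buf, sizeof(buf))) > 0)
			;	/* a real monitor would parse the entries here */

		/* Block until the next mount or umount, then do it all again. */
		if (poll(&pfd, 1, -1) < 0) {
			perror("poll");
			return 1;
		}
	}
}

With a large direct mount map that re-read is thousands of lines, done
in every interested process, on every change.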
Ian