Message-ID: <20171129165104.GE2873@qualcomm.com>
Date: Wed, 29 Nov 2017 16:51:05 +0000
From: Mike Marion <mmarion@...lcomm.com>
To: Ian Kent <raven@...maw.net>
CC: NeilBrown <neilb@...e.com>,
autofs mailing list <autofs@...r.kernel.org>,
Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Al Viro <viro@...iv.linux.org.uk>
Subject: Re: [PATCH 3/3] autofs - fix AT_NO_AUTOMOUNT not being honored
On Wed, Nov 29, 2017 at 02:00:31PM +0800, Ian Kent wrote:
> On 29/11/17 11:45, NeilBrown wrote:
> > On Wed, Nov 29 2017, Ian Kent wrote:
> >
> >> Adding Al Viro to the Cc list as I believe Stephen Whitehouse and
> >> Al have discussed something similar, please feel free to chime in
> >> with your thoughts Al.
> >>
> >> On 29/11/17 09:17, NeilBrown wrote:
> >>> On Tue, Nov 28 2017, Mike Marion wrote:
> >>>
> >>>> On Tue, Nov 28, 2017 at 07:43:05AM +0800, Ian Kent wrote:
> >>>>
> >>>>> I think the situation is going to get worse before it gets better.
> >>>>>
> >>>>> On recent Fedora and kernel, with a large map and heavy mount activity
> >>>>> I see:
> >>>>>
> >>>>> systemd, udisksd, gvfs-udisks2-volume-monitor, gvfsd-trash,
> >>>>> gnome-settings-daemon, packagekitd and gnome-shell
> >>>>>
> >>>>> all go crazy consuming large amounts of CPU.
> >>>>
> >>>> Yep. I'm not even worried about the CPU usage as much (yet, I'm sure
> >>>> it'll be more of a problem as time goes on). We have pretty huge
> >>>> direct maps and our initial startup tests on a new host with the link vs
> >>>> file took >6 hours. That's not a typo. We worked with Suse engineering
> >>>> to come up with a fix, which should've been pushed here some time ago.
> >>>>
> >>>> Then, there's shutdowns (and reboots). They also took a long time (on
> >>>> the order of 20+ min) because it would walk the entire /proc/mounts
> >>>> "unmounting" things. Also fixed now. That one had something to do
> >>>> with SMP code, as with a single CPU/core it didn't take long at all.
> >>>>
> >>>> Just got a fix for the Suse grub2-mkconfig script so that its
> >>>> parsing, which looks for the root dev, skips over fstype autofs
> >>>> (the probe_nfsroot_device function).
> >>>>
> >>>>> The symlink change was probably the start; a number of applications
> >>>>> now go directly to the proc file system for this information.
> >>>>>
> >>>>> For large mount tables and many processes accessing the mount table
> >>>>> (probably reading the whole thing, either periodically or on change
> >>>>> notification) the current system does not scale well at all.
> >>>>
> >>>> We use Clearcase in some instances as well, and that's yet another thing
> >>>> adding mounts, and its startup is very slow, due to the size of
> >>>> /proc/mounts.
> >>>>
> >>>> It's definitely something that's more than just autofs and probably
> >>>> going to get worse, as you say.
> >>>
> >>> If we assume that applications are going to want to read
> >>> /proc/self/mount* a lot, we probably need to make it faster.
> >>> I performed a simple experiment where I mounted 1000 tmpfs filesystems,
> >>> copied /proc/self/mountinfo to /tmp/mountinfo, then
> >>> ran 4 for loops in parallel catting one of these files to /dev/null 1000 times.
> >>> On a single CPU VM:
> >>> For /tmp/mountinfo, each group of 1000 cats took about 3 seconds.
> >>> For /proc/self/mountinfo, each group of 1000 cats took about 14 seconds.
> >>> On a 4 CPU VM
> >>> /tmp/mountinfo: 1.5secs
> >>> /proc/self/mountinfo: 3.5 secs
> >>>
> >>> Using "perf record" it appears that most of the cost is repeated calls
> >>> to prepend_path, with a small contribution from the fact that each read
> >>> only returns 4K rather than the 128K that cat asks for.
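
[The experiment described above can be sketched in a few lines. This is a
serial, single-process approximation (the original used 4 parallel "cat"
loops, so absolute timings will differ); it assumes a Linux host and uses
cat-sized 128K read requests as in the quoted message:]

```python
# Compare re-reading a static copy of mountinfo with re-reading the proc
# file, which the kernel regenerates through seq_file on every open/read.
import shutil
import time

def bench(path, iters=1000):
    """Time `iters` full reads of `path` using 128K requests, like cat."""
    start = time.monotonic()
    for _ in range(iters):
        with open(path, "rb") as f:
            while f.read(128 * 1024):
                pass
    return time.monotonic() - start

shutil.copy("/proc/self/mountinfo", "/tmp/mountinfo")
print("static copy:", bench("/tmp/mountinfo"))
print("proc file  :", bench("/proc/self/mountinfo"))
```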
> >>>
> >>> If we could hang a cache off struct mnt_namespace and use it instead of
> >>> iterating the mount table - using rcu and ns->event to ensure currency -
> >>> we should be able to minimize the cost of this increased use of
> >>> /proc/self/mount*.
> >>>
> >>> I suspect that the best approach would be to implement a cache at the
> >>> seq_file level.
> >>>
> >>> One possible problem might be if applications assume that a read will
> >>> always return a whole number of lines (it currently does). To be
> >>> sure we remain safe, we would only be able to use the cache for
> >>> a read() syscall which reads the whole file.
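
[A userspace analogy of the generation-counter cache being proposed; purely
illustrative, with `render` and the generation number standing in for the
kernel's table formatting and ns->event:]

```python
# Keep the rendered table plus the generation it was rendered at, and
# regenerate only when the generation has moved on. A read of an
# unchanged table then costs a counter compare plus a copy, rather than
# a full walk of the mount list with repeated prepend_path calls.
class MountTableCache:
    def __init__(self, render):
        self.render = render       # produces the full table as one string
        self.generation = None
        self.cached = None

    def read(self, current_generation):
        if current_generation != self.generation:
            self.cached = self.render()
            self.generation = current_generation
        return self.cached
```

[As the quoted text notes, only a read() that consumes the whole file could
safely be served from the cache; partial reads would keep the existing
per-record path so the whole-lines behaviour is preserved.]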
> >>> How big do people see /proc/self/mount* getting? What size reads
> >>> does 'strace' show the various programs using to read it?
> >>
> >> Buffer size almost always has a significant impact on IO so that's
> >> likely a big factor but the other aspect of this is notification
> >> of changes.
> >>
> >> The risk is improving the IO efficiency might just allow a higher
> >> rate of processing of change notifications and similar symptoms
> >> to what we have now.
> >
> > That's an issue that we should be able to get empirical data on.
> > Are these systems that demonstrate problems actually showing a high
> > rate of changes to the mount table, or is the mount table being
> > read frequently despite not changing?
> > To find out you could use a program like one of the answers to:
> >
> > https://stackoverflow.com/questions/5070801/monitoring-mount-point-changes-via-proc-mounts
> >
> > or instrument the kernel to add a counter to 'struct mnt_namespace' and
> > have mounts_open_common() increment that counter and report the value as
> > well as ns->event. The long term ratio of the two numbers might be
> > interesting.
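
[A sketch of collecting that data from userspace, along the lines of the
stackoverflow answers: the kernel signals a mount-table change on
/proc/self/mounts as an exceptional poll condition, and re-reading the file
re-arms the notification. The sampling window here is arbitrary:]

```python
# Count mount-table change events over a short window by polling
# /proc/self/mounts for POLLERR|POLLPRI.
import select
import time

def count_mount_events(seconds=2.0):
    with open("/proc/self/mounts", "rb") as f:
        p = select.poll()
        p.register(f.fileno(), select.POLLERR | select.POLLPRI)
        events = 0
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            if p.poll(250):            # timeout in milliseconds
                f.seek(0)
                f.read()               # consume the change, re-arm poll
                events += 1
        return events

print("mount table changes seen:", count_mount_events())
```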
>
> One scenario is, under heavy mount activity, the CPU usage of processes
> systemd, udisksd2, gvfs-udisks2-volume-monitor, gvfsd-trash,
> gnome-settings-daemon and packagekitd (and possibly gnome-shell, might
> be a false positive though) grow to consume all available CPU.
>
> The processes gvfs-udisks2-volume-monitor and gnome-settings-daemon
> (and possibly packagekitd, might be false positive) continue to use
> excessive CPU when the mount table is large but there is no mount/umount
> activity.
>
> In this case heavy mount activity means starting autofs with a direct
> mount map of 15k+ entries.
>
> The shutdown can be a problem too but umount(2) is just too slow for it
> to be as pronounced a problem as what we see at startup. The umount
> slowness appears to be constant. I'm not sure if it's proportional in any
> way to the number of mounts present on the system.
BTW, the previously mentioned fix we got from Suse is (I'm pretty sure)
this one:
http://kernel.suse.com/cgit/kernel-source/commit/?id=10c43659465b18bd337f4434f41e133d09e08b13
which simply changes a synchronize_rcu() call to synchronize_rcu_expedited().
I do believe that they found it both proportional to the number of
mounts and something that only slowed down shutdown noticeably on
SMP hosts. The SMP dependence was only stumbled upon because their
initial testing was on a single-CPU VM, about the only place anyone
runs with just 1 CPU these days.
> >>
> >> The suggestion is that a system that allows for incremental (diff
> >> type) update notification is needed to allow mount table propagation
> >> to scale well.
> >>
> >> That implies some as yet undefined user <-> kernel communication
> >> protocol.
> >
> > I can almost conceive a mountinfo variant where new entries always
> > appear at the end and deletions appear as just "$mnt_id".
> > struct proc_mounts would contain a bitmap similar to mnt_id_ida which
> > records which ids have been reported to this file. When an event is
> > noticed it checks for deletions and reports them before anything else.
> > Keeping track of location in the ns->list list might be tricky. It
> > could be done with a new 64bit never-reused mnt id, though some other
> > approach might be possible.
> >
> > An app would read to the end, then poll/select for exceptional events
> > and keep reading.
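
[Until such an interface exists, a userspace consumer can approximate the
incremental view by keying snapshots on the mount ID (the first field of
each mountinfo line). A sketch, borrowing the "deletions appear as just
$mnt_id" format from the proposal above; note it inherits exactly the
mnt_id-reuse weakness that motivates Neil's never-reused 64-bit ID:]

```python
# Diff two mountinfo snapshots: report new/changed entries as full lines
# and deletions as bare mount IDs, mirroring the proposed incremental file.
def parse_mountinfo(text):
    return {line.split(" ", 1)[0]: line
            for line in text.splitlines() if line.strip()}

def diff_mounts(old, new):
    added = [l for mid, l in new.items() if old.get(mid) != l]
    deleted = [mid for mid in old if mid not in new]
    return added, deleted

snap1 = parse_mountinfo(open("/proc/self/mountinfo").read())
# ... later, after a change notification:
snap2 = parse_mountinfo(open("/proc/self/mountinfo").read())
added, deleted = diff_mounts(snap1, snap2)
```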
>
> Keeping track of modifications is certainly one of the problems and this
> sounds like a good start to resolving it ....
>
> I believe the current preferred delivery of this to be a shared library
> interface available to user space that talks to the kernel to obtain the
> needed information.
>
> Ian
--
Mike Marion-Unix SysAdmin/Sr. Staff IT Engineer-http://www.qualcomm.com