Message-ID: <befb09a5f62852a828ac959acbad5d5e50c967de.camel@themaw.net>
Date: Tue, 23 Jun 2020 19:51:12 +0800
From: Ian Kent <raven@...maw.net>
To: Rick Lindsley <ricklind@...ux.vnet.ibm.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Tejun Heo <tj@...nel.org>, Stephen Rothwell <sfr@...b.auug.org.au>,
Andrew Morton <akpm@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>,
David Howells <dhowells@...hat.com>,
Miklos Szeredi <miklos@...redi.hu>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 0/6] kernfs: proposed locking and concurrency
improvement
On Tue, 2020-06-23 at 02:33 -0700, Rick Lindsley wrote:
> On 6/22/20 11:02 PM, Greg Kroah-Hartman wrote:
>
> > First off, this is not my platform, and not my problem, so it's
> > funny
> > you ask me :)
>
> Weeeelll, not your platform perhaps, but MAINTAINERS does list you
> first and Tejun second as maintainers for kernfs, so in that sense
> any patches would need to go through you. So, your opinions do matter.
>
>
> > Anyway, as I have said before, my first guesses would be:
> > - increase the granularity size of the "memory chunks",
> > reducing
> > the number of devices you create.
>
> This would mean finding every utility that relies on this
> behavior. That may be possible, although not easy, for distro or
> platform software, but it's hard to guess what user-related utilities
> may have been created by other consumers of those distros or that
> platform. In any case, removing an interface without warning is a
> hanging offense in many Linux circles.
>
> > - delay creating the devices until way after booting, or do it
> > on a totally different path/thread/workqueue/whatever to
> > prevent delay at booting
>
> This has been considered, but it again requires a full list of
> utilities relying on this interface and determining which of them may
> want to run before the devices are "loaded" at boot time. It may be
> few, or even zero, but it would be a much more disruptive change in
> the boot process than what we are suggesting.
>
> > And then there's always:
> >     - don't create them at all, only do so if userspace asks
> >       you to.
>
> If they are done in parallel on demand, you'll see the same problem
> (load average of 1000+, contention in the same spot.) You obviously
> won't hold up the boot, of course, but your utility and anything else
> running on the machine will take an unexpected pause ... for
> somewhere between 30 and 90 minutes. Seems equally unfriendly.
>
> A variant of this, which does have a positive effect, is to observe
> that coldplug during initramfs does seem to load up the memory device
> tree without incident. We do a second coldplug after we switch roots
> and this is the one that runs into timer issues. I have asked "those
> that should know" why there is a second coldplug. I can guess but
> would prefer to know to avoid that screaming option. If that second
> coldplug is unnecessary for the kernfs memory interfaces to work
> correctly, then that is an alternate, and perhaps even better
> solution. (It wouldn't change the fact that kernfs was not built for
> speed and this problem remains below the surface to trip up another.)
We might still need the patches here for that on-demand mechanism
to be feasible.
For example, for an ls of the node directory it should be doable to
enumerate the nodes in readdir without creating dentries, but there's
the inevitable stat() of each path that follows, which would probably
lead to similar contention.
And changing the division of the entries into sub-directories would
inevitably break anything that does actually need to access them.
Ian