Message-ID: <20200624092708.GA1749737@kroah.com>
Date: Wed, 24 Jun 2020 11:27:08 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: Rick Lindsley <ricklind@...ux.vnet.ibm.com>
Cc: Tejun Heo <tj@...nel.org>, Ian Kent <raven@...maw.net>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Andrew Morton <akpm@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>,
David Howells <dhowells@...hat.com>,
Miklos Szeredi <miklos@...redi.hu>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 0/6] kernfs: proposed locking and concurrency
improvement
On Wed, Jun 24, 2020 at 02:04:15AM -0700, Rick Lindsley wrote:
> In contrast, the provided patch fixes the observed problem with no ripple
> effect to other subsystems or utilities.
Your patch, as-is, is fine, but the issue we keep raising is the
assumption that it will somehow solve your real problem here.
The real problem is you have way too many devices that somehow need to
all get probed at boot time before you can do anything else.
> Greg had suggested
> Treat the system as a whole please, don't go for a short-term
> fix that we all know is not solving the real problem here.
>
> Your solution affects multiple subsystems; this one affects one. Which is
> the whole system approach in terms of risk? You mentioned you support 30k
> scsi disks but only because they are slow so the inefficiencies of kernfs
> don't show. That doesn't bother you?
Systems with 30k devices do not have any problems that I know of,
because they do not do foolish things like stalling boot until they are
all discovered :)
What are the odds that, if we take this patch, you all have to come back
in a few years with some other change to the api as memory sizes grow
even larger? What happens if you boot this change on a system with 10x
the memory you currently have? Try simulating that by creating 10x the
number of devices and see what happens. Does the bottleneck still
remain in kernfs or is it somewhere else?
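One rough way to sketch the suggested scale-up test without real hardware is the kernel's scsi_debug module, which fabricates add_host * num_tgts * max_luns SCSI devices (those are real scsi_debug parameters; the values below are only illustrative, not from this thread):

```shell
#!/bin/sh
# Illustrative scale-up sketch: scsi_debug fabricates
# add_host * num_tgts * max_luns fake SCSI devices, so picking
# values whose product is ~10x the current device count lets you
# probe where the bottleneck moves. Values here are hypothetical.
HOSTS=10
TGTS=100
LUNS=30
TOTAL=$((HOSTS * TGTS * LUNS))
echo "scsi_debug would create $TOTAL devices"

# On a scratch machine (needs root), the actual experiment would be:
#   modprobe scsi_debug add_host=$HOSTS num_tgts=$TGTS max_luns=$LUNS
#   time ls /sys/class/scsi_device | wc -l   # cost of sysfs enumeration
# then profile (e.g. with perf) to see whether kernfs is still the
# hot spot or the bottleneck has moved elsewhere.
```

Repeating the measurement at 1x and 10x makes it clear whether kernfs locking, device probing, or userspace enumeration dominates as the count grows.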
thanks,
greg k-h