Message-ID: <YdhAjOkU4TkbFvVJ@kroah.com>
Date: Fri, 7 Jan 2022 14:30:52 +0100
From: Greg KH <gregkh@...uxfoundation.org>
To: Imran Khan <imran.f.khan@...cle.com>
Cc: Tejun Heo <tj@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 1/2] kernfs: use kernfs_node specific mutex and spinlock.
On Fri, Jan 07, 2022 at 11:01:55PM +1100, Imran Khan wrote:
> Hi Tejun,
>
> On 7/1/22 7:30 am, Tejun Heo wrote:
> > Hello,
> >
> > On Tue, Jan 04, 2022 at 08:40:30AM +0100, Greg KH wrote:
> >>> We are seeing the launch time of some DB workloads adversely
> >>> affected by this contention.
> >>
> >> What workloads? sysfs should NEVER be in the fast-path of any normal
> >> operation, including booting. What benchmark or real-world workload
> >> is having problems here?
> >
> > In most systems, this shouldn't matter at all but sysfs and cgroupfs host a
> > lot of statistics files which may be read regularly. It is conceivable that
> > in large enough systems, the current locking scheme doesn't scale well
> > enough. We should definitely measure the overhead and gains tho.
> >
> > If this is something necessary, I think one possible solution is using
> > hashed locks. I know that it isn't a popular choice but it makes sense given
> > the constraints.
> >
>
> Could you please point me to some current users of hashed locks? I can
> check that code and modify my patches accordingly.
>
> As of now I have not found any standard benchmarks/workloads to show the
> impact of this contention. We have some in-house DB applications where
> the impact can be easily seen. Of course those applications can be
> modified to get the needed data from somewhere else or access sysfs less
> frequently, but nonetheless I am trying to make the current locking
> scheme more scalable.
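
[Editor's note: for readers unfamiliar with the hashed-lock pattern
Tejun mentions above, here is a minimal illustrative sketch, loosely
modeled on existing kernel users such as the futex hash buckets: one
global lock is replaced by a fixed array of locks indexed by a hash of
the object's address. All names here (NR_KERNFS_LOCKS,
kernfs_node_lock(), etc.) are hypothetical and not taken from this
patch series.]

	#include <linux/kernfs.h>
	#include <linux/mutex.h>
	#include <linux/hash.h>

	/* Hypothetical table size; in practice tuned by measurement. */
	#define NR_KERNFS_LOCK_BITS	6
	#define NR_KERNFS_LOCKS		(1 << NR_KERNFS_LOCK_BITS)

	static struct mutex kernfs_locks[NR_KERNFS_LOCKS];

	/* Map a node to one of the locks by hashing its address. */
	static inline struct mutex *kernfs_node_lock_ptr(struct kernfs_node *kn)
	{
		return &kernfs_locks[hash_ptr(kn, NR_KERNFS_LOCK_BITS)];
	}

	static inline void kernfs_node_lock(struct kernfs_node *kn)
	{
		mutex_lock(kernfs_node_lock_ptr(kn));
	}

	static inline void kernfs_node_unlock(struct kernfs_node *kn)
	{
		mutex_unlock(kernfs_node_lock_ptr(kn));
	}

	static int __init kernfs_lock_init(void)
	{
		int i;

		for (i = 0; i < NR_KERNFS_LOCKS; i++)
			mutex_init(&kernfs_locks[i]);
		return 0;
	}

[The tradeoff: memory stays bounded no matter how many nodes exist,
and contention drops roughly by the table size, but two unrelated hot
nodes can still hash to the same lock, and any path that must hold two
node locks at once has to order the acquisitions (e.g. by address) to
avoid deadlock.]
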
Why are applications hitting sysfs so hard that this is noticeable? What
in it is needed by userspace so badly? And what changed to make this a
requirement for them?
thanks,
greg k-h