Message-ID: <03cb9939-efbb-1ce8-42f5-c167ac5246da@oracle.com>
Date: Fri, 7 Jan 2022 23:01:55 +1100
From: Imran Khan <imran.f.khan@...cle.com>
To: Tejun Heo <tj@...nel.org>, Greg KH <gregkh@...uxfoundation.org>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 1/2] kernfs: use kernfs_node specific mutex and spinlock.

Hi Tejun,
On 7/1/22 7:30 am, Tejun Heo wrote:
> Hello,
>
> On Tue, Jan 04, 2022 at 08:40:30AM +0100, Greg KH wrote:
>>> We are seeing the launch time of some DB workloads adversely getting
>>> affected with this contention.
>>
>> What workloads? sysfs should NEVER be in the fast-path of any normal
>> operation, including booting. What benchmark or real workload is having
>> problems here?
>
> In most systems, this shouldn't matter at all but sysfs and cgroupfs host a
> lot of statistics files which may be read regularly. It is conceivable that
> in large enough systems, the current locking scheme doesn't scale well
> enough. We should definitely measure the overhead and gains tho.
>
> If this is something necessary, I think one possible solution is using
> hashed locks. I know that it isn't a popular choice but it makes sense given
> the constraints.
>
Could you please point me to some current users of hashed locks? I can
check that code and modify my patches accordingly.
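
To make sure I am thinking of the same scheme, below is a rough sketch of
what I assume "hashed locks" would look like here: a fixed-size array of
mutexes indexed by a hash of the kernfs_node pointer. The names
(NR_KERNFS_LOCK_BITS, kernfs_locks[], kernfs_node_mutex(),
kernfs_lock_init()) are made up for illustration and do not exist in the
kernel today:

#include <linux/hash.h>
#include <linux/kernfs.h>
#include <linux/mutex.h>

/* 2^6 = 64 hashed mutexes shared by all kernfs nodes */
#define NR_KERNFS_LOCK_BITS	6
#define NR_KERNFS_LOCKS		(1 << NR_KERNFS_LOCK_BITS)

static struct mutex kernfs_locks[NR_KERNFS_LOCKS];

/* map a node onto one of the hashed mutexes via its pointer value */
static inline struct mutex *kernfs_node_mutex(struct kernfs_node *kn)
{
	return &kernfs_locks[hash_ptr(kn, NR_KERNFS_LOCK_BITS)];
}

/* one-time initialization, e.g. from kernfs_init() */
static void __init kernfs_lock_init(void)
{
	int i;

	for (i = 0; i < NR_KERNFS_LOCKS; i++)
		mutex_init(&kernfs_locks[i]);
}

Callers would then take mutex_lock(kernfs_node_mutex(kn)) instead of a
single global mutex, so unrelated nodes usually land on different locks
and contention is spread out while the number of locks stays bounded; the
trade-off is occasional false contention when two unrelated nodes hash to
the same mutex.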
So far I have not found any standard benchmarks/workloads that show the
impact of this contention. We have some in-house DB applications where
the impact can be seen easily. Of course, those applications could be
modified to get the needed data from somewhere else or to access sysfs
less frequently, but nonetheless I am trying to make the current locking
scheme more scalable.
Thanks
-- Imran