Message-ID: <Yd81V5AYz/sMps9F@slm.duckdns.org>
Date: Wed, 12 Jan 2022 10:08:55 -1000
From: Tejun Heo <tj@...nel.org>
To: Imran Khan <imran.f.khan@...cle.com>
Cc: Greg KH <gregkh@...uxfoundation.org>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 1/2] kernfs: use kernfs_node specific mutex and spinlock.
Hello,
On Tue, Jan 11, 2022 at 10:42:31AM +1100, Imran Khan wrote:
> The database application has a health monitoring component which
> regularly collects stats from sysfs. With a small number of databases
> this was not an issue, but recently several customers did some
> consolidation and ended up running hundreds of databases on the same
> server, and in those setups the contention became more and more
> evident. As more customers consolidate, we are seeing more occurrences
> of this issue, and its severity depends on the number of databases
> running on the server.
>
> I will have to reach out to the application team to get a list of all
> sysfs files being accessed, but one of them is
> "/sys/class/infiniband/<device>/ports/<port number>/gids/<gid index>".
I can imagine a similar scenario w/ cgroups with heavy stacking - each
application fetches its own stats regularly, which isn't a problem in
isolation, but once you put thousands of them on a machine the shared
lock can get pretty hot. The cgroup scenario is probably even more
convincing in that they'd be hitting different files, yet the lock
still gets hot because it is shared across all of them.
Greg, I think the call for better read-side scalability is reasonably
justified, especially for heavy workload stacking, which is a valid use
case and likely to become more prevalent. Given the requirements,
hashed locking seems like the best solution here: it doesn't add
noticeable space overhead and is pretty easy to scale. What do you
think?
Thanks.
--
tejun