Message-ID: <20200227223945.GN10737@dread.disaster.area>
Date: Fri, 28 Feb 2020 09:39:45 +1100
From: Dave Chinner <david@...morbit.com>
To: Eric Sandeen <sandeen@...hat.com>
Cc: Matthew Wilcox <willy@...radead.org>,
Waiman Long <longman@...hat.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
Jonathan Corbet <corbet@....net>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-doc@...r.kernel.org,
Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
Eric Biggers <ebiggers@...gle.com>
Subject: Re: [PATCH 00/11] fs/dcache: Limit # of negative dentries

On Thu, Feb 27, 2020 at 11:04:40AM -0800, Eric Sandeen wrote:
> On 2/26/20 8:29 AM, Matthew Wilcox wrote:
> > On Wed, Feb 26, 2020 at 11:13:53AM -0500, Waiman Long wrote:
> >> A new sysctl parameter "dentry-dir-max" is introduced which accepts a
> >> value of 0 (default) for no limit or a positive integer 256 and up. Small
> >> dentry-dir-max numbers are forbidden to avoid excessive dentry count
> >> checking which can impact system performance.
> >
> > This is always the wrong approach. A sysctl is just a way of blaming
> > the sysadmin for us not being very good at programming.
> >
> > I agree that we need a way to limit the number of negative dentries.
> > But that limit needs to be dynamic and depend on how the system is being
> > used, not on how some overworked sysadmin has configured it.
> >
> > So we need an initial estimate for the number of negative dentries that
> > we need for good performance. Maybe it's 1000. It doesn't really matter;
> > it's going to change dynamically.
> >
> > Then we need a metric to let us know whether it needs to be increased.
> > Perhaps that's "number of new negative dentries created in the last
> > second". And we need to decide how much to increase it; maybe it's by
> > 50% or maybe by 10%. Perhaps somewhere between 10-100% depending on
> > how high the recent rate of negative dentry creation has been.
>
> There are pitfalls to this approach as well. Consider what libnss
> does every time it starts up (via curl in this case)
>
> # cat /proc/sys/fs/dentry-state
> 3154271 3131421 45 0 2863333 0
> # for I in `seq 1 10`; do curl https://sandeen.net/ &>/dev/null; done
> # cat /proc/sys/fs/dentry-state
> 3170738 3147844 45 0 2879882 0
>
> voila, 16k more negative dcache entries, thanks to:
>
> https://github.com/nss-dev/nss/blob/317cb06697d5b953d825e050c1d8c1ee0d647010/lib/softoken/sdb.c#L390
>
> i.e. each time it inits, it will intentionally create up to 10,000 negative
> dentries which will never be looked up again.
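
For anyone who wants to watch that happen without libnss: the six
fields in dentry-state are nr_dentry, nr_unused, age_limit (unused),
want_pages (unused), nr_negative and an unused dummy, so it's the
fifth column above that jumped by ~16k. A rough equivalent - the
file names here are arbitrary, any lookup that misses will do:

    # watch the fifth field (nr_negative) grow as lookups miss
    cat /proc/sys/fs/dentry-state
    for I in `seq 1 10000`; do stat /tmp/no-such-file-$I; done 2>/dev/null
    cat /proc/sys/fs/dentry-state
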
Sandboxing via memcg-restricted cgroups means users and/or
applications cannot create unbounded numbers of negative dentries,
and that largely solves this problem.

For system daemons whose environment is controlled by a systemd
unit file, this should be pretty trivial to do, and memcg-directed
memory reclaim will control negative dentry buildup.
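
As a sketch of what that looks like - the unit name and path here
are made up, and 256M is just an example cap:

    # bound the service's memcg so reclaim prunes its dentry buildup
    mkdir -p /etc/systemd/system/example.service.d
    cat > /etc/systemd/system/example.service.d/memory.conf <<EOF
    [Service]
    MemoryMax=256M
    EOF
    systemctl daemon-reload
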
For short-lived applications, teardown of the cgroup will free
all the negative dentries it created - they don't hang around
forever.
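
e.g. with a raw cgroup v2 hierarchy (the mount point is the usual
one, the cgroup name is arbitrary):

    # run the job in a throwaway memcg, then tear it down
    mkdir /sys/fs/cgroup/curl-job
    echo $$ > /sys/fs/cgroup/curl-job/cgroup.procs
    curl https://sandeen.net/ &>/dev/null
    echo $$ > /sys/fs/cgroup/cgroup.procs   # move the shell back out
    rmdir /sys/fs/cgroup/curl-job
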
For long-lived applications, negative dentries are bound by the
application's memcg limits, and buildup will only affect the
application's own performance, not that of the whole system.
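
i.e. something like this (path and limit purely illustrative):

    # cap the memcg; reclaim inside it prunes its own negative dentries
    echo 512M > /sys/fs/cgroup/system.slice/example.service/memory.max
    # reclaimable slab (dentries included) is accounted per-cgroup
    grep slab_reclaimable /sys/fs/cgroup/system.slice/example.service/memory.stat
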
IOWs, I'd expect this sort of resource control problem to be solved
at the user, application and/or distro level, not with a huge kernel
hammer.

> I /think/ the original intent of this work was to limit such rogue
> applications, so scaling with use probably isn't the way to go.

The original intent was to prevent problems on old kernels that
supported terabytes of memory but could not use cgroup/memcg
infrastructure to isolate and contain negative dentry growth.
That was a much simpler, targeted negative dentry limiting solution,
not the ... craziness that can be found in this patchset.

Cheers,
Dave.
--
Dave Chinner
david@...morbit.com