Date:   Sat, 14 Mar 2020 20:46:40 -0700
From:   Matthew Wilcox <willy@...radead.org>
To:     Waiman Long <longman@...hat.com>
Cc:     Alexander Viro <viro@...iv.linux.org.uk>,
        Jonathan Corbet <corbet@....net>,
        Luis Chamberlain <mcgrof@...nel.org>,
        Kees Cook <keescook@...omium.org>,
        Iurii Zaikin <yzaikin@...gle.com>,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-doc@...r.kernel.org,
        Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
        Eric Biggers <ebiggers@...gle.com>,
        Dave Chinner <david@...morbit.com>,
        Eric Sandeen <sandeen@...hat.com>
Subject: Re: [PATCH 00/11] fs/dcache: Limit # of negative dentries

On Wed, Feb 26, 2020 at 11:13:53AM -0500, Waiman Long wrote:
> As there is no limit for negative dentries, it is possible that a sizeable
> portion of system memory can be tied up in dentry cache slabs. Dentry slabs
> are generally reclaimable if the dentries are in the LRUs. Still, having
> too much memory used up by dentries can be problematic:
> 
>  1) When a filesystem with too many negative dentries is being unmounted,
>     the process of draining the dentries associated with the filesystem
>     can take some time. To users, the system may seem to hang for
>     a while.  The long wait may also cause unexpected timeout errors or
>     other warnings.  This can happen when a long-running container with
>     many negative dentries is being destroyed, for instance.
> 
>  2) Tying up too much memory in unused negative dentries means there
>     is less memory available for other uses. Even though the kernel is
>     able to reclaim unused dentries when running out of free memory,
>     doing so still introduces additional latency for the application,
>     reducing its performance.

There's a third problem: having a lot of negative dentries can clog the
dcache hash chains, slowing down lookups of names that do exist.  I tried
to quantify this, and found a weird result:

root@...o-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m5.402s
user	0m4.361s
sys	0m1.230s
root@...o-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m5.572s
user	0m4.337s
sys	0m1.407s
root@...o-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m5.607s
user	0m4.522s
sys	0m1.342s
root@...o-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m5.599s
user	0m4.472s
sys	0m1.369s
root@...o-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m5.574s
user	0m4.498s
sys	0m1.300s

Pretty consistent system time, between about 1.3 and 1.4 seconds.

root@...o-kvm:~# grep dentry /proc/slabinfo 
dentry             20394  21735    192   21    1 : tunables    0    0    0 : slabdata   1035   1035      0
root@...o-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m5.515s
user	0m4.353s
sys	0m1.359s

At this point, we have 20k dentries allocated.

Now, pollute the dcache with names that don't exist:

root@...o-kvm:~# for i in `seq 1 100000`; do cat /dev/null$i >/dev/zero; done 2>/dev/null
root@...o-kvm:~# grep dentry /proc/slabinfo 
dentry             20605  21735    192   21    1 : tunables    0    0    0 : slabdata   1035   1035      0

Huh.  The number of allocated dentries has stayed almost constant (20394
before, 20605 after) despite 100,000 lookups of names that don't exist.
Still, maybe the bad dentries have pushed out the good ones.

root@...o-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m6.644s
user	0m4.921s
sys	0m1.946s
root@...o-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m6.676s
user	0m5.004s
sys	0m1.909s
root@...o-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m6.662s
user	0m4.980s
sys	0m1.916s
root@...o-kvm:~# time for i in `seq 1 10000`; do cat /dev/null >/dev/zero; done
real	0m6.714s
user	0m4.973s
sys	0m1.986s

Well, we certainly made it suck.  We're up to a pretty consistent 1.9-2.0
seconds of kernel time, roughly 45-50% worse.  We've also made user time
worse, somehow.

Anyhow, I should write a proper C program to measure this; a rough sketch
of what I have in mind is below.  But I thought I'd share this raw data
with you now to demonstrate that dcache pollution is a real problem today,
even on a machine with only 2GB of memory.
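
Concretely, something along these lines, with the caveat that it's only a
minimal sketch: the paths, iteration counts and the use of open()/stat()
are placeholders rather than a finished benchmark.  It times open()+close()
of an existing name, pollutes the dcache by stat()ing names that don't
exist, then times the same lookups again.

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

/* Monotonic clock reading in seconds. */
static double now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Time 'count' open()+close() cycles of an existing path. */
static double time_lookups(const char *path, long count)
{
	double start = now();
	long i;

	for (i = 0; i < count; i++) {
		int fd = open(path, O_RDONLY);

		if (fd >= 0)
			close(fd);
	}
	return now() - start;
}

/* Create 'count' negative dentries by looking up names that don't exist. */
static void pollute(const char *dir, long count)
{
	char name[256];
	struct stat st;
	long i;

	for (i = 0; i < count; i++) {
		snprintf(name, sizeof(name), "%s/no-such-file-%ld", dir, i);
		stat(name, &st);	/* expected to fail with ENOENT */
	}
}

int main(void)
{
	printf("before pollution: %.3fs\n", time_lookups("/dev/null", 100000));
	pollute("/dev", 1000000);
	printf("after pollution:  %.3fs\n", time_lookups("/dev/null", 100000));
	return 0;
}

Compiled with plain gcc, the difference between the two timings should show
the cost of walking the longer hash chains; the absolute numbers will of
course depend on the machine.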
