Message-ID: <YHZWFQp8seUUxHe9@zeniv-ca.linux.org.uk>
Date:   Wed, 14 Apr 2021 02:40:21 +0000
From:   Al Viro <viro@...iv.linux.org.uk>
To:     Gautham Ananthakrishna <gautham.ananthakrishna@...cle.com>
Cc:     linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-mm@...ck.org, matthew.wilcox@...cle.com,
        khlebnikov@...dex-team.ru
Subject: Re: [PATCH RFC 0/6] fix the negative dentries bloating system memory
 usage

On Thu, Jan 21, 2021 at 06:49:39PM +0530, Gautham Ananthakrishna wrote:

> We tested this patch set recently and found that it limits negative dentries
> to a small part of total memory. The results below are from a test run on two
> types of servers: one with 256G of memory and 24 CPUs, the other with 3T of
> memory and 384 CPUs. The test case uses many processes to generate negative
> dentries in parallel (sketched below); after 72 hours the negative dentry
> count was stable around the numbers below, even when the test kept running
> for much longer. Without the patch set, negative dentries consumed 197G on
> the 256G system in less than half an hour, and 2.4T on the 3T system within
> a day.
> 
> system memory   neg-dentry-number   neg-dentry-mem-usage
> 256G            55259084            10.6G
> 3T              202306756           38.8G
> 
> For the performance tests, we ran the following, and no regression was found.
> 
> 1. create 1M negative dentries and then touch them to convert them to positive
>    dentries
> 
> 2. create 10K/100K/1M files
> 
> 3. remove 10K/100K/1M files
> 
> 4. kernel compile
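
A minimal userspace sketch of what such a negative-dentry generator might look
like, assuming the test simply looks up names that do not exist (each failed
lookup leaves a negative dentry in the dcache) and then optionally creates
them, as in perf test 1 above. The name pattern and overall structure are
illustrative guesses, not the actual test program from the thread:

/*
 * Hypothetical sketch, not the test program from the thread.
 * Each stat() of a nonexistent name fails with ENOENT but leaves a
 * negative dentry behind; the optional second pass creates the files,
 * turning those negative dentries into positive ones.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        long i, count = (argc > 1) ? atol(argv[1]) : 1000000;
        int do_touch = (argc > 2);
        char name[64];
        struct stat st;

        for (i = 0; i < count; i++) {
                /* Lookup of a name that does not exist: the kernel caches
                 * the miss as a negative dentry in the parent directory. */
                snprintf(name, sizeof(name), "neg-%ld-%ld", (long)getpid(), i);
                stat(name, &st);
        }

        if (do_touch) {
                /* "touch" pass: creating the files converts the cached
                 * negative dentries into positive ones. */
                for (i = 0; i < count; i++) {
                        snprintf(name, sizeof(name), "neg-%ld-%ld",
                                 (long)getpid(), i);
                        int fd = open(name, O_CREAT | O_WRONLY, 0644);
                        if (fd >= 0)
                                close(fd);
                }
        }
        return 0;
}

Running many instances of this in parallel, each in its own directory, would
approximate the test described above.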

Good for you; how would that work for thinner boxen, though?  I agree that if you
have 8M hash buckets your "no more than 3 unused negatives per bucket" is generous
enough for everything, but that's less obvious for something with e.g. 4 or 8 gigs.
And believe it or not, there are real-world boxen like that ;-)
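
For a rough sense of scale (the per-dentry size and bucket counts below are
back-of-the-envelope assumptions, not figures from the thread): the quoted
results work out to roughly 206 bytes per negative dentry (10.6G / 55,259,084),
so

        8M buckets * 3 unused negatives per bucket * ~206 bytes  ~=  4.8G

which is indeed negligible on a 256G box.  The dentry hash table is sized at
boot in proportion to available memory, so a 4G or 8G machine gets a much
smaller table; whether the same fixed per-bucket allowance is still the right
bound there is what's being questioned above.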
