Date:   Thu, 12 Jul 2018 10:21:20 -0700
From:   James Bottomley <James.Bottomley@...senPartnership.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     Waiman Long <longman@...hat.com>, Michal Hocko <mhocko@...nel.org>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Jonathan Corbet <corbet@....net>,
        "Luis R. Rodriguez" <mcgrof@...nel.org>,
        Kees Cook <keescook@...omium.org>,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-mm@...ck.org, linux-doc@...r.kernel.org,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Jan Kara <jack@...e.cz>,
        "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Ingo Molnar <mingo@...nel.org>,
        Miklos Szeredi <mszeredi@...hat.com>,
        Larry Woodman <lwoodman@...hat.com>,
        "Wangkai (Kevin C)" <wangkai86@...wei.com>
Subject: Re: [PATCH v6 0/7] fs/dcache: Track & limit # of negative dentries

On Thu, 2018-07-12 at 09:49 -0700, Matthew Wilcox wrote:
> On Thu, Jul 12, 2018 at 09:04:54AM -0700, James Bottomley wrote:
[...]
> > The question I'm trying to get an answer to is why does the dentry
> > cache need special limits when the mm handling of the page cache
> > (and other mm caches) just works?
> 
> I don't know that it does work.  Or that it works well.

I'm not claiming the general heuristics are perfect (in fact I know we
still have a lot of problems with dirty reclaim and writeback).  I am
willing to bet, though, that introducing per-object limits for every
object will meet a lot of opposition in any discussion of the
heuristics.

Our clean cache heuristics are simple: clean caches are easy to reclaim
and are thus treated like free memory (there's little cost to filling
them or reclaiming them again).  There is speculation that this
equivalence is problematic because the shrinkers reclaim objects while
the mm is looking to reclaim pages, so you can end up with a few
surviving objects pinning many pages even after the shrinker has freed
most of the objects on them.
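
To make the objects-versus-pages mismatch concrete, here's a minimal
userspace sketch (purely illustrative, not kernel code; the page and
object counts are made-up assumptions) that simulates a shrinker
freeing objects at random and then counts how many backing pages are
still pinned because at least one object on them survives:

/* Illustrative simulation, not kernel code.  A "page" here holds
 * OBJS_PER_PAGE objects (think dentries packed into a slab page) and
 * can only be returned to the mm when *all* of its objects are free.
 */
#include <stdio.h>
#include <stdlib.h>

#define NPAGES        1000
#define OBJS_PER_PAGE 32	/* hypothetical objects per slab page */

int main(void)
{
	int used[NPAGES];	/* live objects remaining on each page */
	int i, freed = 0, pinned = 0;

	for (i = 0; i < NPAGES; i++)
		used[i] = OBJS_PER_PAGE;

	srand(42);
	/* "Shrinker" attempts to free ~90% of all objects at random. */
	for (i = 0; i < NPAGES * OBJS_PER_PAGE * 9 / 10; i++) {
		int p = rand() % NPAGES;
		if (used[p] > 0) {
			used[p]--;
			freed++;
		}
	}

	/* A page is reclaimable only if it holds no live objects. */
	for (i = 0; i < NPAGES; i++)
		if (used[i] > 0)
			pinned++;

	printf("freed %d objects; %d of %d pages still pinned\n",
	       freed, pinned, NPAGES);
	return 0;
}

Run it and you'll see that even after the vast majority of objects are
freed, nearly every page still has a straggler on it, so almost no
memory is actually returned to the mm.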

However, we haven't even reached that level yet ... I'm still
struggling to establish that we have a problem with the behaviour of
the dentry cache under current mm heuristics.  

James
