Message-ID: <CAJfpegvhW-EHkJ_KPm5mYA9igt0QRz1ZwwDZE7qj6HFvkVdyHA@mail.gmail.com>
Date:   Thu, 3 May 2018 10:18:01 +0200
From:   Miklos Szeredi <miklos@...redi.hu>
To:     Al Viro <viro@...iv.linux.org.uk>
Cc:     Miklos Szeredi <mszeredi@...hat.com>, linux-kernel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] dcache: fix quadratic behavior with parallel shrinkers

On Thu, May 3, 2018 at 9:44 AM, Miklos Szeredi <miklos@...redi.hu> wrote:
> On Thu, May 3, 2018 at 12:45 AM, Al Viro <viro@...iv.linux.org.uk> wrote:
>> On Thu, May 03, 2018 at 12:26:35AM +0200, Miklos Szeredi wrote:
>>> When multiple shrinkers are operating on a directory containing many
>>> dentries, shrinking it takes much longer than if only a single shrinker
>>> were operating on the directory.
>>>
>>> Call the shrinker instances A and B, which shrink DIR containing NUM
>>> dentries.
>>>
>>> Assume A wins the race for DIR's d_lock; it then goes on to move all
>>> unlinked dentries to its dispose list.  When it's done, B will scan the
>>> directory once again, but will find that all dentries are already being
>>> shrunk, so it will end up with an empty dispose list.  Both A and B will
>>> have found NUM dentries (data.found == NUM).
>>>
>>> Now comes the interesting part: A will proceed to shrink its dispose list
>>> by killing individual dentries and decrementing the refcount of their
>>> parent (which is DIR).  NB: decrementing DIR's refcount will block if
>>> DIR's d_lock is held.  B will shrink its empty list and then immediately
>>> restart scanning the directory, where it takes DIR's d_lock, scans the
>>> remaining dentries and finds nothing to dispose of.
>>>
>>> The result is that B does the directory scan over and over again, holding
>>> DIR's d_lock, while A waits for a chance to decrement DIR's refcount and
>>> therefore makes very slow progress.  B is wasting time and holding up A's
>>> progress at the same time.
>>>
>>> The proposed fix is to detect this situation in B (some dentries were
>>> found, but all of them are already being shrunk) and simply sleep for a
>>> while before retrying the scan.  The sleep is proportional to the number
>>> of dentries found.
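
For reference, the idea would look roughly like this on top of the current
shrink_dcache_parent() loop.  This is an untested sketch rather than the
actual patch; the backoff granularity is arbitrary, and msleep() would need
linux/delay.h in fs/dcache.c:

void shrink_dcache_parent(struct dentry *parent)
{
        for (;;) {
                struct select_data data;

                INIT_LIST_HEAD(&data.dispose);
                data.start = parent;
                data.found = 0;

                d_walk(parent, &data, select_collect, NULL);
                if (!data.found)
                        break;

                if (list_empty(&data.dispose)) {
                        /*
                         * Everything we found is already on another
                         * shrinker's dispose list.  Restarting the scan
                         * right away would only retake DIR's d_lock and
                         * starve the shrinker that is trying to dput()
                         * the parent, so back off instead, proportionally
                         * to how much work that shrinker still has left.
                         */
                        msleep(1 + data.found / 1024);
                        continue;
                }

                shrink_dentry_list(&data.dispose);
                cond_resched();
        }
}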
>>
>> The thing is, the majority of those massive shrink_dcache_parent() runs
>> can be killed.  Let's do that first and see if anything else is really
>> needed.
>>
>> As it is, rmdir() and rename() are ridiculously bad - they should only call
>> shrink_dcache_parent() after successful ->rmdir() or ->rename().  Sure,
>> there are other places where we do large shrink_dcache_parent() runs,
>> but those won't trigger in parallel on the same tree.
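
Schematically, for rmdir() that would mean moving the shrink from before the
filesystem call to after it has succeeded, i.e. something along these lines
in vfs_rmdir() (just to illustrate the reordering, not an actual diff):

        /* now: the potentially huge shrink runs unconditionally, up front */
        shrink_dcache_parent(dentry);
        error = dir->i_op->rmdir(dir, dentry);

        /* proposed: shrink only once ->rmdir() has actually succeeded */
        error = dir->i_op->rmdir(dir, dentry);
        if (!error)
                shrink_dcache_parent(dentry);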
>
> I think we can also hit this with the LRU pruners (prune_dcache_sb(),
> shrink_dcache_sb()) running in parallel with shrink_dcache_parent().
> shrink_dcache_sb() looks better in this regard, though, since it will
> only hold up to 1024 dentries in its dispose list.

Looking further, prune_dcache_sb() will also batch with a maximum of
1024 objects, which mitigates the problem but doesn't make it go away.
Killing 1024 dentries still takes on the order of 100us without
contention on d_lock.  If shrink_dcache_parent() is busy-looping on
those dentries, contention will make this much worse.
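
For context, the 1024 comes from the superblock shrinker rather than from
prune_dcache_sb() itself; roughly (paraphrasing fs/super.c and fs/dcache.c
from memory, so treat this as a sketch rather than the exact code):

        /*
         * fs/super.c, alloc_super(): the per-sb shrinker is registered
         * with a batch size of 1024, so super_cache_scan() never asks
         * prune_dcache_sb() to scan more than roughly that many objects
         * per call.
         */
        s->s_shrink.batch = 1024;

/*
 * fs/dcache.c: each call isolates at most sc->nr_to_scan dentries from the
 * sb's LRU onto a private dispose list and then kills them.  Killing them
 * is where the parent's refcount gets dropped, and thus where the d_lock
 * contention with a busy-looping shrink_dcache_parent() shows up.
 */
long prune_dcache_sb(struct super_block *sb, struct shrink_control *sc)
{
        LIST_HEAD(dispose);
        long freed;

        freed = list_lru_shrink_walk(&sb->s_dentry_lru, sc,
                                     dentry_lru_isolate, &dispose);
        shrink_dentry_list(&dispose);
        return freed;
}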

Thanks,
Miklos
