Message-ID: <4bcbd2e7-b5e3-6f45-51cf-8658f9c9009d@oracle.com>
Date: Wed, 16 Dec 2020 10:46:46 -0800
From: Junxiao Bi <junxiao.bi@...cle.com>
To: Konstantin Khlebnikov <koct9i@...il.com>
Cc: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-mm@...ck.org, Alexander Viro <viro@...iv.linux.org.uk>,
Waiman Long <longman@...hat.com>,
Gautham Ananthakrishna <gautham.ananthakrishna@...cle.com>,
matthew.wilcox@...cle.com
Subject: Re: [PATCH RFC 0/8] dcache: increase poison resistance
Hi Konstantin,
How would you like to proceed with this patch set?
This patch set, as it is, already fixes the customer issue we faced: it
stops the memory fragmentation caused by negative dentries, and we saw
no performance regression in our testing. In production workloads it is
common for applications to keep creating and removing temp files, which
leaves a lot of negative dentries behind over time; sooner or later this
causes memory fragmentation, and the system gets stuck in memory
compaction and becomes unresponsive. It would be good to push the patch
set upstream for merging. If you are busy, we can try to push it again.
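
For reference, the pattern is easy to reproduce from userspace: every
failed lookup of a distinct name leaves a negative dentry behind. A
minimal sketch (path and count are made up for illustration; on kernels
that report it, the nr_negative column of /proc/sys/fs/dentry-state
grows while this runs):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	struct stat st;
	char name[64];
	long i;

	for (i = 0; i < 10000000L; i++) {
		/* stat() of a nonexistent name caches a negative
		 * dentry for it in the parent directory */
		snprintf(name, sizeof(name), "/tmp/neg-%ld", i);
		stat(name, &st);
	}
	return 0;
}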
Thanks,
Junxiao.
On 12/14/20 3:10 PM, Junxiao Bi wrote:
> On 12/13/20 11:43 PM, Konstantin Khlebnikov wrote:
>
>>
>>
>> On Sun, Dec 13, 2020 at 9:52 PM Junxiao Bi <junxiao.bi@...cle.com>
>> wrote:
>>
>> On 12/11/20 11:32 PM, Konstantin Khlebnikov wrote:
>>
>> > On Thu, Dec 10, 2020 at 2:01 AM Junxiao Bi
>> > <junxiao.bi@...cle.com> wrote:
>> >
>> > Hi Konstantin,
>> >
>> > We tested this patch set recently and found that it limits negative
>> > dentries to a small part of total memory, and we also don't see any
>> > performance regression with it. Do you have any plan to integrate
>> > it into mainline? It would help a lot with the memory fragmentation
>> > caused by the dentry slab: there were a lot of customer cases where
>> > sys% was very high because most CPUs were doing memory compaction;
>> > the dentry slab was taking too much memory and nearly all dentries
>> > in it were negative.
>> >
>> >
>> > Right now I don't have any plans for this. I suspect such problems
>> > will appear much more often since machines are getting bigger.
>> > So, somebody will take care of it.
>> We have already had a lot of customer cases. It makes no sense to
>> leave so many negative dentries in the system; they cause memory
>> fragmentation with little benefit.
>>
>>
>> Dcache could grow so big only if the system lacks memory pressure.
>>
>> The simplest solution is a cron job which provides such pressure by
>> creating a sparse file on a disk-based fs and then reading it.
>> This should wash away all inactive caches with no IO and zero chance
>> of OOM.
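
For illustration, the washer such a cron job would run could be as
simple as the sketch below (path and size are assumptions; the file
must live on a disk-based filesystem). Reading the holes of a sparse
file fills the page cache with zero pages at no IO cost, and that
pressure ages and evicts inactive cache entries, including negative
dentries:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	static char buf[1 << 20];
	const char *path = "/var/tmp/washer";	/* disk-based fs */
	off_t size = (off_t)64 << 30;		/* 64 GiB of holes */
	ssize_t n;
	int fd;

	fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
	if (fd < 0 || ftruncate(fd, size) < 0) {
		perror(path);
		return 1;
	}
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		;	/* sequential read: zero pages, no disk IO */
	close(fd);
	unlink(path);
	return n < 0;
}

Scheduled off-peak from cron, this bounds how long inactive negative
dentries can survive between runs.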
> Sound good, will try.
>>
>> >
>> > The first part, which collects negative dentries at the end of the
>> > list of siblings, could be done in a more obvious way by splitting
>> > the list in two. But this touches much more code.
>> That would add a new field to dentry?
>>
>>
>> Yep. Decision is up to maintainers.
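
For illustration only, a sketch of what "splitting the list in two"
might cost in struct dentry; the field names below are invented, not
taken from the patch set:

#include <linux/list.h>

struct dentry {
	/* ... existing fields ... */
	struct list_head d_subdirs;	/* positive children */
	struct list_head d_neg_subdirs;	/* negative children: the new
					 * field this approach needs */
	struct list_head d_child;	/* link in one of the parent's
					 * two sibling lists */
	/* ... */
};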
>>
>> >
>> > The last patch isn't very rigid but makes non-trivial changes.
>> > Probably it's better to call some garbage-collector thingy
>> > periodically. The LRU list needs pressure to age and reorder
>> > entries properly.
>>
>> Swap the negative dentry to the head of the hash list when it gets
>> accessed? Extra ones can be easily trimmed during the swap; is using
>> GC meant to reduce the perf impact?
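
For illustration, a minimal sketch of that "move to hash-chain head on
access" idea using the kernel's bit-locked hlist primitives. The
function name is invented and the locking is simplified; the real
dcache would also have to keep RCU-walk lookups safe:

#include <linux/dcache.h>
#include <linux/list_bl.h>
#include <linux/rculist_bl.h>

static void d_bump_to_chain_head(struct dentry *dentry,
				 struct hlist_bl_head *chain)
{
	hlist_bl_lock(chain);
	/* unlink from the current position ... */
	__hlist_bl_del(&dentry->d_hash);
	/* ... and re-insert at the head, so recently accessed
	 * dentries sort before stale ones */
	hlist_bl_add_head_rcu(&dentry->d_hash, chain);
	hlist_bl_unlock(chain);
}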
>>
>>
>> The reclaimer/shrinker scans dentries in LRU lists; that's another list.
>
> Ah, you mean GC to reclaim from the LRU list. I am not sure it could
> keep up with the speed at which negative dentries are generated.
>
> Thanks,
>
> Junxiao.
>
>> My patch used the order in hash lists in a very unusual way. Don't be
>> confused.
>>
>> There are four lists
>> parent - siblings
>> hashtable - hashchain
>> LRU
>> inode - alias
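
For reference, those four lists map onto struct dentry fields roughly
as follows (abridged; see include/linux/dcache.h for the full
definition):

struct dentry {
	struct hlist_bl_node d_hash;	/* hashtable - hash chain */
	struct dentry *d_parent;	/* parent - siblings (up) */
	/* ... */
	struct list_head d_lru;		/* LRU list */
	struct list_head d_child;	/* parent - siblings (link) */
	struct list_head d_subdirs;	/* our children */
	union {
		struct hlist_node d_alias;	/* inode - alias */
		/* ... */
	} d_u;
};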
>>
>>
>> Thanks,
>>
>> Junxiao.
>>
>> >
>> > GC could be off by default, or the thresholds set very high (50% of
>> > RAM, for example). The final setup could be left up to the owners of
>> > large systems, which need fine tuning.
>>