Message-ID: <e707fd66-16c2-8523-dd8b-860b5b6bb11d@linux.alibaba.com>
Date: Mon, 18 Nov 2019 10:44:57 +0800
From: Alex Shi <alex.shi@...ux.alibaba.com>
To: Shakeel Butt <shakeelb@...gle.com>
Cc: Cgroups <cgroups@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Tejun Heo <tj@...nel.org>, Hugh Dickins <hughd@...gle.com>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
Yang Shi <yang.shi@...ux.alibaba.com>,
Matthew Wilcox <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Dan Williams <dan.j.williams@...el.com>,
Michal Hocko <mhocko@...e.com>,
Wei Yang <richard.weiyang@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Arun KS <arunks@...eaurora.org>
Subject: Re: [PATCH v3 1/7] mm/lru: add per lruvec lock for memcg
On 2019/11/16 at 14:28, Shakeel Butt wrote:
> On Fri, Nov 15, 2019 at 7:15 PM Alex Shi <alex.shi@...ux.alibaba.com> wrote:
>>
>> Currently memcg still uses the per-node pgdat->lru_lock to guard its
>> lruvec. That causes some lru_lock contention in a high container
>> density system.
>>
>> If we can use a per-lruvec lock instead, that could relieve much of
>> the lru_lock contention.
>>
>> The later patches will replace pgdat->lru_lock with lruvec->lru_lock
>> and show the performance benefit with benchmarks.
>
> Merge this patch with actual usage. No need to have a separate patch.
Thanks for the comment, Shakeel!

Yes, but considering that the 3rd patch, which does the actual
replacement, is huge and hard to split, I'd rather pull something out
of it to keep the series a bit more readable; the basic idea is
sketched below. Do you agree?

Thanks
Alex
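
For reference, a minimal sketch of the direction this patch takes. The
helper names below are illustrative only, not necessarily the ones used
in the series:

struct lruvec {
	struct list_head	lists[NR_LRU_LISTS];
	spinlock_t		lru_lock;	/* new: per-lruvec lock */
	/* ... existing fields unchanged ... */
};

/*
 * Illustrative helpers: callers lock the page's lruvec instead of the
 * node-wide pgdat->lru_lock, so LRU operations in different memcgs on
 * the same node no longer serialize on a single lock.
 */
static inline void lruvec_lock(struct lruvec *lruvec)
{
	spin_lock(&lruvec->lru_lock);
}

static inline void lruvec_unlock(struct lruvec *lruvec)
{
	spin_unlock(&lruvec->lru_lock);
}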