Message-ID: <20191118120815.GF20752@bombadil.infradead.org>
Date: Mon, 18 Nov 2019 04:08:15 -0800
From: Matthew Wilcox <willy@...radead.org>
To: Alex Shi <alex.shi@...ux.alibaba.com>
Cc: Shakeel Butt <shakeelb@...gle.com>,
Cgroups <cgroups@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Tejun Heo <tj@...nel.org>, Hugh Dickins <hughd@...gle.com>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
Yang Shi <yang.shi@...ux.alibaba.com>,
Vlastimil Babka <vbabka@...e.cz>,
Dan Williams <dan.j.williams@...el.com>,
Michal Hocko <mhocko@...e.com>,
Wei Yang <richard.weiyang@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Arun KS <arunks@...eaurora.org>
Subject: Re: [PATCH v3 1/7] mm/lru: add per lruvec lock for memcg
On Mon, Nov 18, 2019 at 10:44:57AM +0800, Alex Shi wrote:
>
>
> > On 2019/11/16 at 2:28 PM, Shakeel Butt wrote:
> > On Fri, Nov 15, 2019 at 7:15 PM Alex Shi <alex.shi@...ux.alibaba.com> wrote:
> >>
> >> Currently memcg still uses the per-node pgdat->lru_lock to guard its lruvec.
> >> That causes some lru_lock contention in a high container density system.
> >>
> >> If we can use a per-lruvec lock, that could relieve much of the lru_lock
> >> contention.
> >>
> >> The later patches will replace the pgdat->lru_lock with lruvec->lru_lock
> >> and show the performance benefit by benchmarks.
> >
> > Merge this patch with actual usage. No need to have a separate patch.
>
> Thanks for comment, Shakeel!
>
> Yes, but considering that the 3rd patch, which is huge and un-splittable since it does the actual replacement, I'd rather
> pull something out of it to try to make the patches a bit more readable. Do you think so?
This method of splitting the patches doesn't help with the reviewability of
the patch series.
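
For reference, a minimal sketch of the idea described in the quoted patch
description: move the lock from the per-node pgdat into each lruvec, so that
different memcgs on the same node no longer contend on a single lock. This is
not the actual patch; the struct and helper names here (lruvec_sketch,
lruvec_lock, lruvec_unlock) are illustrative only.

#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/mmzone.h>

/* Illustrative stand-in for struct lruvec with its own lock. */
struct lruvec_sketch {
	struct list_head	lists[NR_LRU_LISTS];
	/* per-lruvec lock replacing the shared pgdat->lru_lock */
	spinlock_t		lru_lock;
};

/* Lock the lruvec itself instead of its node's pgdat->lru_lock. */
static inline void lruvec_lock(struct lruvec_sketch *lruvec)
{
	spin_lock(&lruvec->lru_lock);
}

static inline void lruvec_unlock(struct lruvec_sketch *lruvec)
{
	spin_unlock(&lruvec->lru_lock);
}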