Message-ID: <a88eef1b-242d-78c6-fecb-35ea00cd739b@linux.alibaba.com>
Date: Thu, 20 Aug 2020 17:49:49 +0800
From: Alex Shi <alex.shi@...ux.alibaba.com>
To: Alexander Duyck <alexander.duyck@...il.com>
Cc: Yang Shi <yang.shi@...ux.alibaba.com>,
kbuild test robot <lkp@...el.com>,
Rong Chen <rong.a.chen@...el.com>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Hugh Dickins <hughd@...gle.com>,
LKML <linux-kernel@...r.kernel.org>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
linux-mm <linux-mm@...ck.org>,
Shakeel Butt <shakeelb@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Johannes Weiner <hannes@...xchg.org>,
Tejun Heo <tj@...nel.org>, cgroups@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Wei Yang <richard.weiyang@...il.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes
On 2020/8/19 10:57 PM, Alexander Duyck wrote:
>>> lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
>> The lock bouncing is better with this patch. Would you like to go
>> further, e.g. using add_lruvecs to reduce the bouncing even more?
>>
>> Thanks
>> Alex
> I'm not sure how much doing something like that would add. In my case
> I had a very specific issue that this is addressing which is the fact
> that every compound page was taking the LRU lock and zone lock
> separately. With this patch that is reduced to one LRU lock per 15
> pages and then the zone lock per page. I am not sure sorting pages by
> lruvec would add much benefit, as I am not certain how often we end up
> with pages interleaved between multiple lruvecs. In addition, since I
> am limiting the quantity to a pagevec, which caps the batch at 15
> pages, I am not sure there would be much benefit to be seen from
> sorting the pages beforehand.
>
The relock drops the current lock and then acquires another one; the cost
there is that the second acquisition has to wait its turn in the fairness
queue when multiple CPUs are contending on the lruvec lock. If we sorted
the pages by lruvec beforehand, each lock would be taken once per run of
same-lruvec pages, removing that repeated fairness waiting. Of course, the
perf result depends on the scenario.
Thanks
Alex