Message-ID: <CAKgT0UdmKpn3x_=F4E-u+mCf75hu4Bu0O0dyds4mHq93G6wJVA@mail.gmail.com>
Date: Thu, 20 Aug 2020 07:13:30 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: Alex Shi <alex.shi@...ux.alibaba.com>
Cc: Yang Shi <yang.shi@...ux.alibaba.com>,
kbuild test robot <lkp@...el.com>,
Rong Chen <rong.a.chen@...el.com>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Hugh Dickins <hughd@...gle.com>,
LKML <linux-kernel@...r.kernel.org>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
linux-mm <linux-mm@...ck.org>,
Shakeel Butt <shakeelb@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Johannes Weiner <hannes@...xchg.org>,
Tejun Heo <tj@...nel.org>, cgroups@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Wei Yang <richard.weiyang@...il.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes
On Thu, Aug 20, 2020 at 2:51 AM Alex Shi <alex.shi@...ux.alibaba.com> wrote:
>
>
>
> On 2020/8/19 at 10:57 PM, Alexander Duyck wrote:
> >>> lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
> >> The lock bouncing is better with this patch. Would you like to go
> >> further, e.g. using add_lruvecs, to reduce the bouncing even more?
> >>
> >> Thanks
> >> Alex
> > I'm not sure how much doing something like that would add. In my
> > case I had a very specific issue that this patch is addressing: every
> > compound page was taking the LRU lock and the zone lock separately.
> > With this patch that is reduced to one LRU lock per 15 pages, and
> > then the zone lock per page. I am not sure adding or sorting pages
> > by lruvec would bring much benefit, as I am not certain how often we
> > will end up with pages interleaved between multiple lruvecs. In
> > addition, since I am limiting the quantity to a pagevec, which caps
> > the batch at 15 pages, I am not sure there would be much benefit to
> > sorting the pages beforehand.
> >
>
> The relock drops the lock and takes another one, and that is where the
> cost is: the second lock has to wait its turn for fairness under
> concurrent lruvec locking. If we sort beforehand, we can avoid that
> fairness wait. Of course, the perf result depends on the scenario.
Agreed. The question is: in how many scenarios are you going to have
pages interleaved across more than one lruvec? I suspect that in most
cases you will only have one lruvec for all the pages being processed
in a single pagevec.
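
To make the pattern concrete, here is a rough sketch of the batching I
have in mind. This is simplified and not the actual patch;
release_pages_by_pagevec() is a made-up name for illustration, and the
LRU removal stands in for the real freeing work.
relock_page_lruvec_irqsave() and unlock_page_lruvec_irqrestore() are
the helpers from the per-lruvec lock series:

	/*
	 * Hypothetical helper, for illustration only: walk one pagevec
	 * (at most 15 pages) and pull each page off its LRU list.
	 * relock_page_lruvec_irqsave() only cycles the spinlock when a
	 * page belongs to a different lruvec than the one already held,
	 * so if all 15 pages share one lruvec the LRU lock is taken
	 * once per pagevec instead of once per page.
	 */
	static void release_pages_by_pagevec(struct pagevec *pvec)
	{
		struct lruvec *lruvec = NULL;
		unsigned long flags;
		int i;

		for (i = 0; i < pagevec_count(pvec); i++) {
			struct page *page = pvec->pages[i];

			/* Relock only if the lruvec changed. */
			lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
			del_page_from_lru_list(page, lruvec, page_lru(page));
		}

		if (lruvec)
			unlock_page_lruvec_irqrestore(lruvec, flags);
	}

Sorting the pagevec by lruvec first would only help in the interleaved
case, which is exactly what I am questioning above.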