Message-ID: <e5a41984-998f-730f-852b-3de82b582d01@suse.cz>
Date: Wed, 14 Apr 2021 19:21:42 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Mel Gorman <mgorman@...hsingularity.net>,
Linux-MM <linux-mm@...ck.org>,
Linux-RT-Users <linux-rt-users@...r.kernel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Chuck Lever <chuck.lever@...cle.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Michal Hocko <mhocko@...nel.org>
Subject: Re: [PATCH 07/11] mm/page_alloc: Remove duplicate checks if
migratetype should be isolated
On 4/14/21 3:39 PM, Mel Gorman wrote:
> Both free_pcppages_bulk() and free_one_page() have very similar
> checks about whether a page's migratetype has changed under the
> zone lock. Use a common helper.
>
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
Seems like for free_pcppages_bulk() this patch makes it check, for each page on
the pcplist:
- zone->nr_isolate_pageblock != 0 instead of a local bool (the performance
might be the same on a modern CPU, I guess)
- is_migrate_isolate(migratetype) for a migratetype obtained by
get_pcppage_migratetype(), which cannot be MIGRATE_ISOLATE, so the check is useless.
As such it doesn't seem a worthwhile cleanup to me, considering all the other
microoptimisations?
> ---
> mm/page_alloc.c | 32 ++++++++++++++++++++++----------
> 1 file changed, 22 insertions(+), 10 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 295624fe293b..1ed370668e7f 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1354,6 +1354,23 @@ static inline void prefetch_buddy(struct page *page)
> prefetch(buddy);
> }
>
> +/*
> + * The migratetype of a page may have changed due to isolation so check.
> + * Assumes the caller holds the zone->lock to serialise against page
> + * isolation.
> + */
> +static inline int
> +check_migratetype_isolated(struct zone *zone, struct page *page, unsigned long pfn, int migratetype)
> +{
> + /* If isolating, check if the migratetype has changed */
> + if (unlikely(has_isolate_pageblock(zone) ||
> + is_migrate_isolate(migratetype))) {
> + migratetype = get_pfnblock_migratetype(page, pfn);
> + }
> +
> + return migratetype;
> +}
> +
> /*
> * Frees a number of pages from the PCP lists
> * Assumes all pages on list are in same zone, and of same order.
> @@ -1371,7 +1388,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> int migratetype = 0;
> int batch_free = 0;
> int prefetch_nr = READ_ONCE(pcp->batch);
> - bool isolated_pageblocks;
> struct page *page, *tmp;
> LIST_HEAD(head);
>
> @@ -1433,21 +1449,20 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> * both PREEMPT_RT and non-PREEMPT_RT configurations.
> */
> spin_lock(&zone->lock);
> - isolated_pageblocks = has_isolate_pageblock(zone);
>
> /*
> * Use safe version since after __free_one_page(),
> * page->lru.next will not point to original list.
> */
> list_for_each_entry_safe(page, tmp, &head, lru) {
> + unsigned long pfn = page_to_pfn(page);
> int mt = get_pcppage_migratetype(page);
> +
> /* MIGRATE_ISOLATE page should not go to pcplists */
> VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
> - /* Pageblock could have been isolated meanwhile */
> - if (unlikely(isolated_pageblocks))
> - mt = get_pageblock_migratetype(page);
>
> - __free_one_page(page, page_to_pfn(page), zone, 0, mt, FPI_NONE);
> + mt = check_migratetype_isolated(zone, page, pfn, mt);
> + __free_one_page(page, pfn, zone, 0, mt, FPI_NONE);
> trace_mm_page_pcpu_drain(page, 0, mt);
> }
> spin_unlock(&zone->lock);
> @@ -1459,10 +1474,7 @@ static void free_one_page(struct zone *zone,
> int migratetype, fpi_t fpi_flags)
> {
> spin_lock(&zone->lock);
> - if (unlikely(has_isolate_pageblock(zone) ||
> - is_migrate_isolate(migratetype))) {
> - migratetype = get_pfnblock_migratetype(page, pfn);
> - }
> + migratetype = check_migratetype_isolated(zone, page, pfn, migratetype);
> __free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
> spin_unlock(&zone->lock);
> }
>