Message-ID: <cb49bbc7-b0c0-65cc-1d9d-a3aaef075650@redhat.com>
Date: Mon, 16 Dec 2019 12:36:14 +0100
From: David Hildenbrand <david@...hat.com>
To: Alexander Duyck <alexander.duyck@...il.com>, kvm@...r.kernel.org,
mst@...hat.com, linux-kernel@...r.kernel.org, willy@...radead.org,
mhocko@...nel.org, linux-mm@...ck.org, akpm@...ux-foundation.org,
mgorman@...hsingularity.net, vbabka@...e.cz
Cc: yang.zhang.wz@...il.com, nitesh@...hat.com, konrad.wilk@...cle.com,
pagupta@...hat.com, riel@...riel.com, lcapitulino@...hat.com,
dave.hansen@...el.com, wei.w.wang@...el.com, aarcange@...hat.com,
pbonzini@...hat.com, dan.j.williams@...el.com,
alexander.h.duyck@...ux.intel.com, osalvador@...e.de
Subject: Re: [PATCH v15 3/7] mm: Add function __putback_isolated_page
[...]
> +/**
> + * __putback_isolated_page - Return a now-isolated page back where we got it
> + * @page: Page that was isolated
> + * @order: Order of the isolated page
> + *
> + * This function is meant to return a page pulled from the free lists via
> + * __isolate_free_page back to the free lists they were pulled from.
> + */
> +void __putback_isolated_page(struct page *page, unsigned int order)
> +{
> +	struct zone *zone = page_zone(page);
> +	unsigned long pfn;
> +	unsigned int mt;
> +
> +	/* zone lock should be held when this function is called */
> +	lockdep_assert_held(&zone->lock);
> +
> +	pfn = page_to_pfn(page);
> +	mt = get_pfnblock_migratetype(page, pfn);
IMHO get_pageblock_migratetype() would be nicer - I guess the compiler
will optimize out the double page_to_pfn().
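Something like this is what I have in mind (just a sketch, untested; pfn is
still needed for __free_one_page()):

	pfn = page_to_pfn(page);
	mt = get_pageblock_migratetype(page);

	/* Return isolated page to tail of freelist. */
	__free_one_page(page, pfn, zone, order, mt);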
> +
> +	/* Return isolated page to tail of freelist. */
> +	__free_one_page(page, pfn, zone, order, mt);
> +}
> +
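BTW, just to check my own understanding of how this is meant to be used: a
caller would pair it with __isolate_free_page() roughly like below (only a
sketch, untested, assuming page is a free buddy page of that order and
zone/flags as in the existing callers):

	spin_lock_irqsave(&zone->lock, flags);
	if (__isolate_free_page(page, order)) {
		/* page is off the free lists here, do whatever is needed */
		__putback_isolated_page(page, order);
	}
	spin_unlock_irqrestore(&zone->lock, flags);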
>  /*
>   * Update NUMA hit/miss statistics
>   *
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index 04ee1663cdbe..d93d2be0070f 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -134,13 +134,11 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
>  		__mod_zone_freepage_state(zone, nr_pages, migratetype);
>  	}
>  	set_pageblock_migratetype(page, migratetype);
> +	if (isolated_page)
> +		__putback_isolated_page(page, order);
>  	zone->nr_isolate_pageblock--;
>  out:
>  	spin_unlock_irqrestore(&zone->lock, flags);
> -	if (isolated_page) {
> -		post_alloc_hook(page, order, __GFP_MOVABLE);
> -		__free_pages(page, order);
> -	}

So if I get it right:

post_alloc_hook() does quite a bit of work, like
- arch_alloc_page(page, order);
- kernel_map_pages(page, 1 << order, 1)
- kasan_alloc_pages()
- kernel_poison_pages(1)
- set_page_owner()

which free_pages_prepare() will then undo, like
- reset_page_owner()
- kernel_poison_pages(0)
- arch_free_page()
- kernel_map_pages()
- kasan_free_nondeferred_pages()

Both would now be skipped, which sounds like the right thing to do IMHO (and
smells like quite a performance improvement). I haven't verified whether
everything we now skip in free_pages_prepare() is actually safe (I think it
is; it seems to be mostly relevant for pages that were actually
used/allocated).
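
In code terms, my understanding of the change is roughly this (just a sketch,
not verified against the actual call chains):

	/* old flow, after dropping zone->lock: */
	post_alloc_hook(page, order, __GFP_MOVABLE);
		/* arch_alloc_page(), kernel_map_pages(.., 1), kasan_alloc_pages(),
		 * kernel_poison_pages(1), set_page_owner()
		 */
	__free_pages(page, order);
		/* -> free_pages_prepare(): reset_page_owner(), kernel_poison_pages(0),
		 *    arch_free_page(), kernel_map_pages(.., 0),
		 *    kasan_free_nondeferred_pages()
		 * -> eventually __free_one_page()
		 */

	/* new flow, still under zone->lock: */
	__putback_isolated_page(page, order);
		/* -> __free_one_page() directly, skipping all of the hooks above */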
--
Thanks,
David / dhildenb