Message-ID: <20200918021654.GC54754@L-31X9LVDL-1304.local>
Date: Fri, 18 Sep 2020 10:16:54 +0800
From: Wei Yang <richard.weiyang@...ux.alibaba.com>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-hyperv@...r.kernel.org, xen-devel@...ts.xenproject.org,
linux-acpi@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Michal Hocko <mhocko@...nel.org>,
Dave Hansen <dave.hansen@...el.com>,
Vlastimil Babka <vbabka@...e.cz>,
Wei Yang <richard.weiyang@...ux.alibaba.com>,
Oscar Salvador <osalvador@...e.de>,
Mike Rapoport <rppt@...nel.org>,
Scott Cheloha <cheloha@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>
Subject: Re: [PATCH RFC 2/4] mm/page_alloc: place pages to tail in
__putback_isolated_page()
On Wed, Sep 16, 2020 at 08:34:09PM +0200, David Hildenbrand wrote:
>__putback_isolated_page() already documents that pages will be placed at
>the tail of the freelist - this is, however, not the case for
>"order >= MAX_ORDER - 2" (see buddy_merge_likely()), and that order
>range should cover all existing users.
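
For readers without the tree handy, here is the check in question, copied
(lightly annotated; details may differ in other releases) from
mm/page_alloc.c around v5.9-rc:

    static inline bool
    buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
                       struct page *page, unsigned int order)
    {
            struct page *higher_page, *higher_buddy;
            unsigned long combined_pfn;

            /*
             * Freeing a page of one of the two highest orders always
             * returns false here, so __free_one_page() places it at the
             * HEAD of the freelist - contrary to the tail placement
             * that __putback_isolated_page()'s comment promises.
             */
            if (order >= MAX_ORDER - 2)
                    return false;

            if (!pfn_valid_within(buddy_pfn))
                    return false;

            /*
             * Otherwise, pick the tail only if the next-order buddy is
             * already free, i.e. a merge is likely soon.
             */
            combined_pfn = buddy_pfn & pfn;
            higher_page = page + (combined_pfn - pfn);
            buddy_pfn = __find_buddy_pfn(combined_pfn, order + 1);
            higher_buddy = higher_page + (buddy_pfn - combined_pfn);

            return pfn_valid_within(buddy_pfn) &&
                   page_is_buddy(higher_page, higher_buddy, order + 1);
    }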
>
>This change affects two users:
>- free page reporting
>- page isolation, when undoing the isolation.
>
>This behavior is desirable for pages that haven't really been touched
>lately - exactly the two users above, which don't actually read/write
>page content but rather move untouched pages around.
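
For reference, the two call sites (paraphrased from the v5.9-rc sources;
surrounding context elided):

    /*
     * mm/page_reporting.c: page_reporting_drain() returns pages to the
     * buddy once the hypervisor has processed them - the pages were
     * only reported, never read or written.
     */
    __putback_isolated_page(page, order, mt);

    /*
     * mm/page_isolation.c: unset_migratetype_isolate() puts a page that
     * stayed free for the whole isolation back on the freelist of its
     * original migratetype - again without touching its contents.
     */
    __putback_isolated_page(page, order, migratetype);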
>
>The new behavior is especially desirable for memory onlining, where we
>allow allocation of newly onlined pages via undo_isolate_page_range()
>in online_pages(). Right now, we always place them to the head of the
>free list, resulting in undesirable behavior: Assume we add
>individual memory chunks via add_memory() and online them right away to
>the NORMAL zone. We create a dependency chain of unmovable allocations
>e.g., via the memmap. The memmap of the next chunk will be placed onto
>previous chunks - if the last block cannot get offlined+removed, all
>dependent ones cannot get offlined+removed. While this can already be
>observed with individual DIMMs, it's more of an issue for virtio-mem
>(and I suspect also ppc DLPAR).
>
>Note: If we observe a degradation due to the changed page isolation
>behavior (which I doubt), we can always make this configurable by the
>instance triggering undo of isolation (e.g., alloc_contig_range(),
>memory onlining, memory offlining).
>
>Cc: Andrew Morton <akpm@...ux-foundation.org>
>Cc: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
>Cc: Mel Gorman <mgorman@...hsingularity.net>
>Cc: Michal Hocko <mhocko@...nel.org>
>Cc: Dave Hansen <dave.hansen@...el.com>
>Cc: Vlastimil Babka <vbabka@...e.cz>
>Cc: Wei Yang <richard.weiyang@...ux.alibaba.com>
>Cc: Oscar Salvador <osalvador@...e.de>
>Cc: Mike Rapoport <rppt@...nel.org>
>Cc: Scott Cheloha <cheloha@...ux.ibm.com>
>Cc: Michael Ellerman <mpe@...erman.id.au>
>Signed-off-by: David Hildenbrand <david@...hat.com>
>---
> mm/page_alloc.c | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
>diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>index 91cefb8157dd..bba9a0f60c70 100644
>--- a/mm/page_alloc.c
>+++ b/mm/page_alloc.c
>@@ -89,6 +89,12 @@ typedef int __bitwise fop_t;
> */
> #define FOP_SKIP_REPORT_NOTIFY ((__force fop_t)BIT(0))
>
>+/*
>+ * Place the freed page to the tail of the freelist after buddy merging. Will
>+ * get ignored with page shuffling enabled.
>+ */
>+#define FOP_TO_TAIL ((__force fop_t)BIT(1))
>+
> /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
> static DEFINE_MUTEX(pcp_batch_high_lock);
> #define MIN_PERCPU_PAGELIST_FRACTION (8)
>@@ -1040,6 +1046,8 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
>
> if (is_shuffle_order(order))
> to_tail = shuffle_pick_tail();
>+ else if (fop_flags & FOP_TO_TAIL)
>+ to_tail = true;
Taking another look at this part - maybe we can move this check to the top?

For the memory onlining case, we currently have the following call flow:

    online_pages
        online_pages_range
        shuffle_zone

This means we always shuffle the newly onlined pages anyway. Maybe we don't
need to shuffle them again when adding them to the free_list?
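
A minimal sketch of the reordering I have in mind (hypothetical, not part
of the posted patch):

    /*
     * Hypothetical: honor FOP_TO_TAIL even for shuffle-eligible
     * orders. The callers passing it (free page reporting, undo of
     * isolation during onlining) move untouched pages, and
     * online_pages() shuffles the whole zone via shuffle_zone()
     * afterwards anyway.
     */
    if (fop_flags & FOP_TO_TAIL)
            to_tail = true;
    else if (is_shuffle_order(order))
            to_tail = shuffle_pick_tail();
    else
            to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
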
> else
> to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
>
>@@ -3289,7 +3297,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
>
> /* Return isolated page to tail of freelist. */
> __free_one_page(page, page_to_pfn(page), zone, order, mt,
>- FOP_SKIP_REPORT_NOTIFY);
>+ FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
> }
>
> /*
>--
>2.26.2
--
Wei Yang
Help you, Help me