Message-ID: <fe0df827-e9d0-ec92-f4e1-99cfc6a6b9e9@suse.cz>
Date: Fri, 3 Jul 2020 18:18:47 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: js1304@...il.com, Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kernel-team@....com, Christoph Hellwig <hch@...radead.org>,
Roman Gushchin <guro@...com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Michal Hocko <mhocko@...e.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v3 8/8] mm/page_alloc: remove a wrapper for
alloc_migration_target()
On 6/23/20 8:13 AM, js1304@...il.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@....com>
>
> There is a well-defined standard migration target callback.
> Use it directly.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
But you could move this to patch 5/8 to reduce churn. And really do the same
there with mm/memory-failure.c's new_page(), to drop the simple wrappers. Only
new_node_page() is complex enough to justify keeping a wrapper.
Hm wait, new_node_page() is only called by do_migrate_range(), which is only
called by __offline_pages() with an explicit test that all pages are from a
single zone, so the nid and nmask could also be set up just once instead of per
page, making it possible to remove that wrapper too.
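Roughly something like this in do_migrate_range(), just a sketch on top of this
series (not even compile-tested), reusing the migration_target_control from
this patch:

        if (!list_empty(&source)) {
                nodemask_t nmask = node_states[N_MEMORY];
                struct migration_target_control mtc = {
                        .nmask = &nmask,
                        .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
                };

                /* The whole range is within one zone, so one nid fits all pages. */
                mtc.nid = page_to_nid(list_first_entry(&source, struct page, lru));

                /*
                 * Prefer a different node, but fall back to this one if there
                 * is no other node with memory (same logic as new_node_page()).
                 */
                node_clear(mtc.nid, nmask);
                if (nodes_empty(nmask))
                        node_set(mtc.nid, nmask);

                ret = migrate_pages(&source, alloc_migration_target, NULL,
                                (unsigned long)&mtc, MIGRATE_SYNC,
                                MR_MEMORY_HOTPLUG);
        }

That keeps the "prefer another node" behavior of the wrapper while doing the
setup once per range instead of per page.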
But for new_page() you would have to define that mtc->nid == NUMA_NO_NODE means
alloc_migration_target() does page_to_nid(page) by itself.
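I mean something like this in alloc_migration_target(), again just a sketch:

        struct migration_target_control *mtc;
        int nid;

        mtc = (struct migration_target_control *)private;
        nid = mtc->nid;
        /* Caller doesn't care about the node, derive it from the page. */
        if (nid == NUMA_NO_NODE)
                nid = page_to_nid(page);

Then mm/memory-failure.c could pass a mtc with .nid = NUMA_NO_NODE and drop
new_page() as well.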
> ---
> mm/page_alloc.c | 9 +++++++--
> mm/page_isolation.c | 11 -----------
> 2 files changed, 7 insertions(+), 13 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9808339..884dfb5 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8359,6 +8359,11 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
> unsigned long pfn = start;
> unsigned int tries = 0;
> int ret = 0;
> + struct migration_target_control mtc = {
> + .nid = zone_to_nid(cc->zone),
> + .nmask = &node_states[N_MEMORY],
> + .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> + };
>
> migrate_prep();
>
> @@ -8385,8 +8390,8 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
> &cc->migratepages);
> cc->nr_migratepages -= nr_reclaimed;
>
> - ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
> - NULL, 0, cc->mode, MR_CONTIG_RANGE);
> + ret = migrate_pages(&cc->migratepages, alloc_migration_target,
> + NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE);
> }
> if (ret < 0) {
> putback_movable_pages(&cc->migratepages);
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index adba031..242c031 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -306,14 +306,3 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
>
> return pfn < end_pfn ? -EBUSY : 0;
> }
> -
> -struct page *alloc_migrate_target(struct page *page, unsigned long private)
> -{
> - struct migration_target_control mtc = {
> - .nid = page_to_nid(page),
> - .nmask = &node_states[N_MEMORY],
> - .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> - };
> -
> - return alloc_migration_target(page, (unsigned long)&mtc);
> -}
>