Message-ID: <CAAmzW4ObN=GAzCLw8betLftTeCEsLs4OihfNXvtg4CaWyWiBCw@mail.gmail.com>
Date: Fri, 26 Jun 2020 14:02:49 +0900
From: Joonsoo Kim <js1304@...il.com>
To: Michal Hocko <mhocko@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, kernel-team@....com,
Vlastimil Babka <vbabka@...e.cz>,
Christoph Hellwig <hch@...radead.org>,
Roman Gushchin <guro@...com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v3 5/8] mm/migrate: make a standard migration target
allocation function
On Thu, Jun 25, 2020 at 9:05 PM, Michal Hocko <mhocko@...nel.org> wrote:
>
> On Tue 23-06-20 15:13:45, Joonsoo Kim wrote:
> > From: Joonsoo Kim <iamjoonsoo.kim@....com>
> >
> > There are some similar functions for migration target allocation. Since
> > there is no fundamental difference, it's better to keep just one rather
> > than keeping all variants. This patch implements base migration target
> > allocation function. In the following patches, variants will be converted
> > to use this function.
> >
> > Note that the PageHighMem() call in the previous function is open-coded
> > as an "is_highmem_idx()" check since it provides more readability.
>
> I was a little bit surprised that alloc_migration_target still uses the
> private argument while it only accepts a migration_target_control
> structure pointer, but then I noticed that you are using it as a
> real callback in a later patch.
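For reference, the callback-style use in a later patch looks roughly like
this (a sketch, not the exact hunk; the variable names, migrate mode and
reason code here are illustrative):

    struct migration_target_control mtc = {
            .nid = node,
            .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
    };

    /* alloc_migration_target() itself is passed as the new_page_t callback,
     * and &mtc travels through the unsigned long 'private' argument. */
    ret = migrate_pages(&source, alloc_migration_target, NULL,
                        (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);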
>
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
>
> A few questions inline
> [...]
>
> > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > index 47b8ccb..820ea5e 100644
> > --- a/mm/memory-failure.c
> > +++ b/mm/memory-failure.c
> > @@ -1648,9 +1648,13 @@ EXPORT_SYMBOL(unpoison_memory);
> >
> > static struct page *new_page(struct page *p, unsigned long private)
> > {
> > - int nid = page_to_nid(p);
> > + struct migration_target_control mtc = {
> > + .nid = page_to_nid(p),
> > + .nmask = &node_states[N_MEMORY],
>
> This could be .nmask = NULL, right? alloc_migration_target doesn't
> modify the node mask, and a NULL nodemask is always interpreted as all
> available nodes.
Will do.
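Something like this for the next version (just dropping the explicit mask,
so it defaults to NULL):

    static struct page *new_page(struct page *p, unsigned long private)
    {
            struct migration_target_control mtc = {
                    .nid = page_to_nid(p),
                    /* .nmask left NULL: the allocator then considers all
                     * available nodes, same as &node_states[N_MEMORY] here. */
                    .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
            };

            return alloc_migration_target(p, (unsigned long)&mtc);
    }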
> > + .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> > + };
> >
> > - return new_page_nodemask(p, nid, &node_states[N_MEMORY]);
> > + return alloc_migration_target(p, (unsigned long)&mtc);
> > }
> >
> [...]
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index 634f1ea..3afff59 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -1536,29 +1536,34 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
> > return rc;
> > }
> >
> > -struct page *new_page_nodemask(struct page *page,
> > - int preferred_nid, nodemask_t *nodemask)
> > +struct page *alloc_migration_target(struct page *page, unsigned long private)
> > {
> > - gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
> > + struct migration_target_control *mtc;
> > + gfp_t gfp_mask;
> > unsigned int order = 0;
> > struct page *new_page = NULL;
> > + int zidx;
> > +
> > + mtc = (struct migration_target_control *)private;
> > + gfp_mask = mtc->gfp_mask;
> >
> > if (PageHuge(page)) {
> > return alloc_huge_page_nodemask(
> > - page_hstate(compound_head(page)),
> > - preferred_nid, nodemask, 0, false);
> > + page_hstate(compound_head(page)), mtc->nid,
> > + mtc->nmask, gfp_mask, false);
> > }
> >
> > if (PageTransHuge(page)) {
> > + gfp_mask &= ~__GFP_RECLAIM;
>
> What's up with this gfp_mask modification?
THP page allocation uses the standard gfp masks, GFP_TRANSHUGE_LIGHT and
GFP_TRANSHUGE, and the __GFP_RECLAIM flags are a big part of that standard
mask design. So, I clear them here so as not to disrupt the THP gfp mask.
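For example, with GFP_USER coming in (as in new_page() above), the mask
arithmetic is roughly:

    /* GFP_USER contains __GFP_RECLAIM, i.e. both __GFP_DIRECT_RECLAIM and
     * __GFP_KSWAPD_RECLAIM. */
    gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;

    /* Drop the caller's reclaim bits... */
    gfp_mask &= ~__GFP_RECLAIM;

    /* ...so that GFP_TRANSHUGE (= GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)
     * re-adds only the reclaim behaviour the standard THP mask intends:
     * direct reclaim, but no kswapd wakeup. */
    gfp_mask |= GFP_TRANSHUGE;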
> > gfp_mask |= GFP_TRANSHUGE;
> > order = HPAGE_PMD_ORDER;
> > }
> > -
> > - if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
> > + zidx = zone_idx(page_zone(page));
> > + if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
> > gfp_mask |= __GFP_HIGHMEM;
> >
> > new_page = __alloc_pages_nodemask(gfp_mask, order,
> > - preferred_nid, nodemask);
> > + mtc->nid, mtc->nmask);
> >
> > if (new_page && PageTransHuge(new_page))
> > prep_transhuge_page(new_page);
> > diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> > index aec26d9..adba031 100644
> > --- a/mm/page_isolation.c
> > +++ b/mm/page_isolation.c
> > @@ -309,7 +309,11 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
> >
> > struct page *alloc_migrate_target(struct page *page, unsigned long private)
> > {
> > - int nid = page_to_nid(page);
> > + struct migration_target_control mtc = {
> > + .nid = page_to_nid(page),
> > + .nmask = &node_states[N_MEMORY],
>
> nmask = NULL again
Okay.
Thanks.