Message-ID: <20210415074152.GA61572@shbuild999.sh.intel.com>
Date: Thu, 15 Apr 2021 15:41:52 +0800
From: Feng Tang <feng.tang@...el.com>
To: Michal Hocko <mhocko@...e.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Mike Kravetz <mike.kravetz@...cle.com>,
Randy Dunlap <rdunlap@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Dave Hansen <dave.hansen@...el.com>,
Ben Widawsky <ben.widawsky@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>
Subject: Re: [PATCH v4 11/13] mm/mempolicy: huge-page allocation for many preferred
Hi Michal,
Many thanks for reviewing the whole patchset! We will work through all of your comments.
On Wed, Apr 14, 2021 at 03:25:34PM +0200, Michal Hocko wrote:
> Please use hugetlb prefix to make it explicit that this is hugetlb
> related.
>
> On Wed 17-03-21 11:40:08, Feng Tang wrote:
> > From: Ben Widawsky <ben.widawsky@...el.com>
> >
> > Implement the missing huge page allocation functionality while obeying
> > the preferred node semantics.
> >
> > This uses a fallback mechanism to try multiple preferred nodes first,
> > and then all other nodes. It cannot use the helper function that was
> > introduced earlier, because huge page allocation already has its own
> > helpers, and consolidating them would have cost more lines of code and
> > effort than it saved.
> >
> > The weirdness is that MPOL_PREFERRED_MANY cannot be referenced yet,
> > because it is part of the UAPI we haven't yet exposed. Instead of making
> > that define global, it is simply switched over in the UAPI patch.
> >
> > [ feng: add NOWARN flag, and skip the direct reclaim to speed up
> >   allocation in some cases ]
> >
> > Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@intel.com
> > Signed-off-by: Ben Widawsky <ben.widawsky@...el.com>
> > Signed-off-by: Feng Tang <feng.tang@...el.com>
> > ---
> > mm/hugetlb.c | 26 +++++++++++++++++++++++---
> > mm/mempolicy.c | 3 ++-
> > 2 files changed, 25 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 8fb42c6..9dfbfa3 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1105,7 +1105,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
> > unsigned long address, int avoid_reserve,
> > long chg)
> > {
> > - struct page *page;
> > + struct page *page = NULL;
> > struct mempolicy *mpol;
> > gfp_t gfp_mask;
> > nodemask_t *nodemask;
> > @@ -1126,7 +1126,17 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
> >
> > gfp_mask = htlb_alloc_mask(h);
> > nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
> > - page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
> > + if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */
>
> Please use MPOL_PREFERRED_MANY explicitly here.
>
> > + gfp_t gfp_mask1 = gfp_mask | __GFP_NOWARN;
> > +
> > + gfp_mask1 &= ~__GFP_DIRECT_RECLAIM;
> > + page = dequeue_huge_page_nodemask(h,
> > + gfp_mask1, nid, nodemask);
> > + if (!page)
> > + page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL);
> > + } else {
> > + page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
> > + }
> > if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
> > SetHPageRestoreReserve(page);
> > h->resv_huge_pages--;
>
> __GFP_DIRECT_RECLAIM handling is not needed here. dequeue_huge_page_nodemask
> only uses the gfp mask to get zone and cpuset constraints. So the above
> should simply have been
> if (mpol->mode == MPOL_PREFERRED_MANY) {
> page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
> if (page)
> goto got_page;
> /* fallback to all nodes */
> nodemask = NULL;
> }
> page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
> got_page:
> if (page ...)
You are right, dequeue_huge_page_nodemask() only consults the gfp mask for
zone and cpuset constraints, so there is no need to change the gfp_mask here.
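Just to confirm our understanding, the dequeue path would then look roughly
like below (only a sketch, untested; MPOL_PREFERRED_MANY becomes usable here
once patch 12/13 exposes it, and the 'check_reserve' label name is only for
illustration):

	gfp_mask = htlb_alloc_mask(h);
	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
	if (mpol->mode == MPOL_PREFERRED_MANY) {
		/* try the preferred nodes first */
		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
		if (page)
			goto check_reserve;
		/* fallback to all nodes */
		nodemask = NULL;
	}
	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
check_reserve:
	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
		SetHPageRestoreReserve(page);
		h->resv_huge_pages--;
	}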
> > @@ -1883,7 +1893,17 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
> > nodemask_t *nodemask;
> >
> > nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
> > - page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
> > + if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */
> > + gfp_t gfp_mask1 = gfp_mask | __GFP_NOWARN;
> > +
> > + gfp_mask1 &= ~__GFP_DIRECT_RECLAIM;
> > + page = alloc_surplus_huge_page(h,
> > + gfp_mask1, nid, nodemask);
> > + if (!page)
> > +			page = alloc_surplus_huge_page(h, gfp_mask, nid, NULL);
> > + } else {
> > + page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
> > + }
>
> And here similar
> if (mpol->mode == MPOL_PREFERRED_MANY) {
> 	page = alloc_surplus_huge_page(h, (gfp_mask | __GFP_NOWARN) & ~__GFP_DIRECT_RECLAIM, nid, nodemask);
> if (page)
> goto got_page;
> /* fallback to all nodes */
> nodemask = NULL;
> }
> page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
> got_page:
> > mpol_cond_put(mpol);
>
> You can have a dedicated gfp mask here if you prefer of course, but I
> think calling out MPOL_PREFERRED_MANY explicitly will make the code
> easier to read.
Will follow. The "if (mpol->mode != MPOL_BIND && nodemask) {
/* AKA MPOL_PREFERRED_MANY */" check and the "MPOL_MAX + 1" define will
be replaced in patch 12/13.
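Just to confirm, the surplus allocation path would then become roughly the
below (again only a sketch, untested; it keeps a dedicated gfp variable as
you mentioned, and keeps the 'nid' argument that alloc_surplus_huge_page()
takes in the current code):

	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
	if (mpol->mode == MPOL_PREFERRED_MANY) {
		gfp_t gfp = (gfp_mask | __GFP_NOWARN) & ~__GFP_DIRECT_RECLAIM;

		/* try the preferred nodes first, without direct reclaim */
		page = alloc_surplus_huge_page(h, gfp, nid, nodemask);
		if (page)
			goto out;
		/* fallback to all nodes */
		nodemask = NULL;
	}
	page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
out:
	mpol_cond_put(mpol);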
Thanks,
Feng
> > return page;
> --
> Michal Hocko
> SUSE Labs