Message-ID: <20230302232150.vvmszlrdzqm5ndjq@google.com>
Date: Thu, 2 Mar 2023 15:21:50 -0800
From: Zach O'Keefe <zokeefe@...gle.com>
To: Yang Shi <shy828301@...il.com>
Cc: Peter Xu <peterx@...hat.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
David Stevens <stevensd@...omium.org>,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH v2] mm/khugepaged: alloc_charge_hpage() take care of mem
charge errors
On Feb 22 14:53, Yang Shi wrote:
> On Wed, Feb 22, 2023 at 11:52 AM Peter Xu <peterx@...hat.com> wrote:
> >
> > If the memory charge fails, instead of returning the hpage along with an
> > error, have the function clean up the folio properly, which is what a
> > function should normally do in this case - either return successfully, or
> > return an error with no side effects left over from the partial run.
> >
> > This also avoids the caller calling mem_cgroup_uncharge() unnecessarily
> > in either the anon or shmem path (even if it's safe to do so).
>
> Thanks for the cleanup. Reviewed-by: Yang Shi <shy828301@...il.com>
>
> >
> > Cc: Yang Shi <shy828301@...il.com>
> > Reviewed-by: David Stevens <stevensd@...omium.org>
> > Acked-by: Johannes Weiner <hannes@...xchg.org>
> > Signed-off-by: Peter Xu <peterx@...hat.com>
> > ---
> > v1->v2:
> > - Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
> > ---
> > mm/khugepaged.c | 9 ++++++++-
> > 1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 8dbc39896811..941d1c7ea910 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
> > gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> > GFP_TRANSHUGE);
> > int node = hpage_collapse_find_target_node(cc);
> > + struct folio *folio;
> >
> > if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
> > return SCAN_ALLOC_HUGE_PAGE_FAIL;
> > - if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
> > +
> > + folio = page_folio(*hpage);
> > + if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
> > + folio_put(folio);
> > + *hpage = NULL;
> > return SCAN_CGROUP_CHARGE_FAIL;
> > + }
> > count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
> > +
> > return SCAN_SUCCEED;
> > }
> >
> > --
> > 2.39.1
> >
>
Thanks, Peter.

Can we also get rid of the unnecessary mem_cgroup_uncharge() calls while we're
at it? Maybe this deserves a separate patch, but after Yang's cleanup of the
!NUMA case (where we would preallocate a hugepage), we can depend on put_page()
to take care of that for us.
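
Just to illustrate what I mean (a hypothetical error-path snippet, not the
exact code in khugepaged.c): since the memcg charge is dropped when the last
reference to the page is released, an error path like the one below only
needs the put, and the explicit uncharge in front of it can go away:

	/*
	 * Hypothetical caller error path, for illustration only.  Once
	 * alloc_charge_hpage() cleans up after a failed charge, the caller
	 * can rely on the final put_page() to uncharge the memcg as part of
	 * freeing, so an explicit uncharge here is redundant:
	 */
	out_nolock:
		if (hpage) {
			/* mem_cgroup_uncharge(page_folio(hpage)); -- no longer needed */
			put_page(hpage);
		}
		return result;

Whether that holds for every remaining caller obviously needs checking
against the current tree.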
Regardless, you can have my
Reviewed-by: Zach O'Keefe <zokeefe@...gle.com>