Message-ID: <CAGWkznH+89v1cDn6PxE-cZ97jnn+QPkuCQHu1ujc-3=c0iVdKw@mail.gmail.com>
Date: Sat, 6 May 2023 10:44:28 +0800
From: Zhaoyang Huang <huangzhaoyang@...il.com>
To: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: "zhaoyang.huang" <zhaoyang.huang@...soc.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <guro@...com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, ke.wang@...soc.com
Subject: Re: [PATCHv2] mm: optimization on page allocation when CMA enabled
On Sat, May 6, 2023 at 6:29 AM Roman Gushchin <roman.gushchin@...ux.dev> wrote:
>
> On Thu, May 04, 2023 at 06:09:54PM +0800, zhaoyang.huang wrote:
> > From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> >
> > Consider the series of scenarios below, with WMARK_LOW=25MB and
> > WMARK_MIN=5MB (1.9GB of managed pages). The current 'fixed 1/2 ratio'
> > only starts using CMA at scenario C, by which point U&R has already
> > dropped below WMARK_LOW (this should be deemed a violation of the
> > current memory policy, namely that U&R should either stay around
> > WMARK_LOW when there is no allocation pressure, or trigger reclaim
> > via the slowpath).
> >
> > free_cma/free_pages(MB)    A(12/30)    B(12/25)    C(12/20)
> > fixed 1/2 ratio               N           N           Y
> > this commit                   Y           Y           Y
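> >
> > In C, for instance, the free pages excluding CMA (U&R) are 20MB -
> > 12MB = 8MB, far below WMARK_LOW (25MB), yet the fixed 1/2 ratio
> > (use CMA once free_cma > free_pages / 2, i.e. 12MB > 10MB) is only
> > now satisfied; in A and B, 12MB does not exceed half of 30MB or
> > 25MB, so CMA stays untouched while U&R is drained.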
> >
> > Suggested-by: Roman Gushchin <roman.gushchin@...ux.dev>
>
> I didn't suggest it in this form, please, drop this tag.
>
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> > ---
> > v2: perform the proportion check when zone_watermark_ok() passes; update the commit message
> > ---
> > mm/page_alloc.c | 36 ++++++++++++++++++++++++++++++++----
> > 1 file changed, 32 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 0745aed..d0baeab 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3071,6 +3071,34 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> >
> > }
> >
> > +#ifdef CONFIG_CMA
> > +static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
> > +{
> > +	unsigned long cma_proportion = 0;
> > +	unsigned long cma_free_proportion = 0;
> > +	unsigned long watermark = 0;
> > +	long count = 0;
> > +	bool cma_first = false;
> > +
> > +	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
> > +	/* check if GFP_MOVABLE passed the previous watermark check only with the help of CMA */
> > +	if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA)))
> > +		/* the watermark check failed without CMA: use CMA first, which
> > +		 * helps U&R stay around WMARK_LOW while drained by GFP_MOVABLE
> > +		 */
> > +		cma_first = true;
>
> This part looks reasonable to me.
>
> > +	else {
> > +		/* check the proportions when the zone watermark check passes */
> > +		count = atomic_long_read(&zone->managed_pages);
> > +		cma_proportion = zone->cma_pages * 100 / count;
> > +		cma_free_proportion = zone_page_state(zone, NR_FREE_CMA_PAGES) * 100
> > +					/ zone_page_state(zone, NR_FREE_PAGES);
> > +		cma_first = (cma_free_proportion >= cma_proportion * 2
>
> Why *2? Please, explain.
It is a magic number which aims to avoid deferring the use of CMA until
free pages get close to WMARK_LOW, by drawing on CMA pages periodically
in advance.
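
For illustration, a minimal userspace sketch of the proportion check
over scenarios A/B/C (the helper name and the ~300MB CMA size are
assumptions made up for this sketch; the thread only gives the 1.9GB
managed size and the free_cma/free_pages numbers):

#include <stdbool.h>
#include <stdio.h>

/* userspace model of the patch's proportion check, working in MB */
static bool use_cma_first(unsigned long managed_mb, unsigned long cma_mb,
                          unsigned long free_mb, unsigned long free_cma_mb)
{
        /* share of the zone that is CMA, in percent */
        unsigned long cma_proportion = cma_mb * 100 / managed_mb;
        /* share of the free pages that are CMA, in percent */
        unsigned long cma_free_proportion = free_cma_mb * 100 / free_mb;

        /* use CMA once free CMA is overrepresented (2x) among the free
         * pages relative to its share of the zone, or once it makes up
         * at least half of the free pages
         */
        return cma_free_proportion >= cma_proportion * 2 ||
               cma_free_proportion >= 50;
}

int main(void)
{
        unsigned long free_mb[] = { 30, 25, 20 };  /* scenarios A, B, C */

        for (int i = 0; i < 3; i++)
                printf("%c: cma_first=%d\n", 'A' + i,
                       use_cma_first(1900, 300, free_mb[i], 12));
        return 0;
}

With these numbers all three scenarios pick CMA first, matching the
"this commit" row of the table above.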
>
> > +				|| cma_free_proportion >= 50);
>
> It will heavily boost the use of cma at early stages of uptime, when there is a lot of !cma
> memory, making contiguous (e.g. hugetlb) allocations fail more often. Not a good idea.
Actually, it is equal to "zone_page_state(zone, NR_FREE_CMA_PAGES) >
zone_page_state(zone, NR_FREE_PAGES) / 2"
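(With positive integer counts, free_cma * 100 / free_pages >= 50 holds
exactly when 2 * free_cma >= free_pages, so this clause matches the
existing 1/2 threshold, up to the >= vs. > boundary, rather than
loosening it.)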
>
> Thanks!