Message-ID: <20210524091220.GC30378@techsingularity.net>
Date: Mon, 24 May 2021 10:12:20 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Linux-MM <linux-mm@...ck.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Matthew Wilcox <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...nel.org>,
Nicholas Piggin <npiggin@...il.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/6] mm/page_alloc: Scale the number of pages that are
 batch freed

On Fri, May 21, 2021 at 03:36:05PM -0700, Dave Hansen wrote:
> ...
> > +static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch)
> > +{
> > +	int min_nr_free, max_nr_free;
> > +
> > +	/* Check for PCP disabled or boot pageset */
> > +	if (unlikely(high < batch))
> > +		return 1;
> > +
> > +	min_nr_free = batch;
> > +	max_nr_free = high - batch;
>
> I puzzled over this for a minute. I *think* it means to say: "Leave at
> least one batch worth of pages in the pcp at all times so that the next
> allocation can still be satisfied from this pcp."
>
Yes, I added a comment.
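Something along these lines (the exact wording in the next version may
differ):

	/*
	 * Leave at least one batch worth of pages on the PCP list so
	 * that the next allocation attempt can still be satisfied from
	 * this pcp.
	 */
	min_nr_free = batch;
	max_nr_free = high - batch;
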
> > +	batch <<= pcp->free_factor;
> > +	if (batch < max_nr_free)
> > +		pcp->free_factor++;
> > +	batch = clamp(batch, min_nr_free, max_nr_free);
> > +
> > +	return batch;
> > +}
> > +
> >  static void free_unref_page_commit(struct page *page, unsigned long pfn,
> >  				   int migratetype)
> >  {
> >  	struct zone *zone = page_zone(page);
> >  	struct per_cpu_pages *pcp;
> > +	int high;
> >
> >  	__count_vm_event(PGFREE);
> >  	pcp = this_cpu_ptr(zone->per_cpu_pageset);
> >  	list_add(&page->lru, &pcp->lists[migratetype]);
> >  	pcp->count++;
> > -	if (pcp->count >= READ_ONCE(pcp->high))
> > -		free_pcppages_bulk(zone, READ_ONCE(pcp->batch), pcp);
> > +	high = READ_ONCE(pcp->high);
> > +	if (pcp->count >= high) {
> > +		int batch = READ_ONCE(pcp->batch);
> > +
> > +		free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch), pcp);
> > +	}
> >  }
> >
> >  /*
> > @@ -3531,6 +3555,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
> >
> >  	local_lock_irqsave(&pagesets.lock, flags);
> >  	pcp = this_cpu_ptr(zone->per_cpu_pageset);
> > +	pcp->free_factor >>= 1;
> >  	list = &pcp->lists[migratetype];
> >  	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
> >  	local_unlock_irqrestore(&pagesets.lock, flags);
>
> A high-level description of the algorithm in the changelog would also be
> nice. I *think* it's basically:
>
> After hitting the high pcp mark, free one pcp->batch at a time. But, as
> subsequent pcp free operations occur, keep doubling the size of the
> freed batches. Cap them so that they always leave at least one
> pcp->batch worth of pages. Scale the size back down by half whenever an
> allocation that consumes a page from the pcp occurs.
>
> While I'd appreciate another comment or two, I do think this is worth
> doing, and the approach seems sound:
>
> Acked-by: Dave Hansen <dave.hansen@...ux.intel.com>
Thanks, I added a few additional comments.
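
To make the scaling easier to see, here is a stand-alone sketch that
simulates the behaviour in plain C. It is not kernel code: nr_pcp_free()
is transcribed from the hunk above with unlikely() dropped, clamp() is
open-coded, and the high/batch values of 512/64 are illustrative
assumptions, not real pcp tunables.

#include <stdio.h>

/* Userspace stand-in for the pcp field the patch touches. */
struct per_cpu_pages {
	int free_factor;
};

static int clamp(int val, int lo, int hi)
{
	if (val < lo)
		return lo;
	if (val > hi)
		return hi;
	return val;
}

/*
 * Transcribed from the hunk above: scale the batch by free_factor,
 * grow the factor while the scaled batch still fits under
 * high - batch, and never free so many pages that less than one
 * batch worth would remain on the PCP.
 */
static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch)
{
	int min_nr_free, max_nr_free;

	/* Check for PCP disabled or boot pageset */
	if (high < batch)
		return 1;

	min_nr_free = batch;
	max_nr_free = high - batch;

	batch <<= pcp->free_factor;
	if (batch < max_nr_free)
		pcp->free_factor++;

	return clamp(batch, min_nr_free, max_nr_free);
}

int main(void)
{
	struct per_cpu_pages pcp = { .free_factor = 0 };
	int high = 512, batch = 64;
	int i;

	/* Five frees in a row, each pushing pcp->count past high. */
	for (i = 0; i < 5; i++) {
		int nr = nr_pcp_free(&pcp, high, batch);

		printf("free %d: frees %d pages, free_factor now %d\n",
		       i, nr, pcp.free_factor);
	}

	/* An allocation from the PCP halves the factor again. */
	pcp.free_factor >>= 1;
	printf("after alloc: free_factor = %d\n", pcp.free_factor);

	return 0;
}

Running it shows the freed batch doubling 64 -> 128 -> 256 and then
capping at 448 (high - batch), with free_factor dropping back to 1
after the allocation.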
--
Mel Gorman
SUSE Labs