Message-ID: <20210222214420.1341e50f@carbon>
Date: Mon, 22 Feb 2021 21:44:20 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Chuck Lever <chuck.lever@...cle.com>, Mel Gorman <mgorman@...e.de>,
Linux NFS Mailing List <linux-nfs@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, brouer@...hat.com,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: alloc_pages_bulk()
On Mon, 15 Feb 2021 12:06:09 +0000
Mel Gorman <mgorman@...hsingularity.net> wrote:
> On Thu, Feb 11, 2021 at 04:20:31PM +0000, Chuck Lever wrote:
> > > On Feb 11, 2021, at 4:12 AM, Mel Gorman <mgorman@...hsingularity.net> wrote:
> > >
> > > <SNIP>
> > >
> > > Parameters to __rmqueue_pcplist are garbage as the parameter order changed.
> > > I'm surprised it didn't blow up in a spectacular fashion. Again, this
> > > hasn't been near any testing and passing a list with high orders to
> > > free_pages_bulk() will corrupt lists too. Mostly it's a curiosity to see
> > > if there is justification for reworking the allocator to fundamentally
> > > deal in batches and then feed batches to pcp lists and the bulk allocator
> > > while leaving the normal GFP API as single page "batches". While that
> > > would be ideal, it's relatively high risk for regressions. There is still
> > > some scope for adding a basic bulk allocator before considering a major
> > > refactoring effort.
> > >
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index f8353ea7b977..8f3fe7de2cf7 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -5892,7 +5892,7 @@ __alloc_pages_bulk_nodemask(gfp_t gfp_mask, unsigned int order,
> > > pcp_list = &pcp->lists[migratetype];
> > >
> > > while (nr_pages) {
> > > - page = __rmqueue_pcplist(zone, gfp_mask, migratetype,
> > > + page = __rmqueue_pcplist(zone, migratetype, alloc_flags,
> > > pcp, pcp_list);
> > > if (!page)
> > > break;
> >
> > The NFS server is considerably more stable now. Thank you!
> >
>
> Thanks for testing!
I've done some testing here:
https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org
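For reference, the test side uses the bulk API roughly as in the sketch
below. This is only an illustration of the usage pattern: beyond the
gfp/order arguments visible in the hunk above, the nid/nodemask arguments
and the return convention of __alloc_pages_bulk_nodemask() are guesses,
and bulk_alloc_into_array() is just a hypothetical helper name:

/* Sketch: allocate @nr_pages order-0 pages in one bulk call and store
 * them in @pages[].  Assumes the bulk API fills @page_list via page->lru
 * and returns the number of pages actually allocated.
 */
static int bulk_alloc_into_array(gfp_t gfp, int nr_pages,
				 struct page **pages)
{
	LIST_HEAD(page_list);
	struct page *page, *tmp;
	int allocated, i = 0;

	allocated = __alloc_pages_bulk_nodemask(gfp, 0, NUMA_NO_NODE,
						NULL, nr_pages, &page_list);
	if (allocated < nr_pages)
		pr_debug("bulk alloc came up short: %d/%d\n",
			 allocated, nr_pages);

	list_for_each_entry_safe(page, tmp, &page_list, lru) {
		list_del(&page->lru);
		pages[i++] = page;	/* hand page to the caller's array */
	}
	return i;
}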
Performance summary:
- Before: 3,677,958 pps
- After : 4,066,028 pps
I'll describe/show the page_pool changes tomorrow.
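The rough idea is a bulk refill of pool->alloc.cache in the slow path,
instead of looping over single-page allocations. A simplified sketch is
below; the bulk call signature is assumed as above, page_pool_dma_map()
stands in for the existing DMA-mapping code, and the actual patch will
differ in details:

/* Sketch: slow path refilling the page_pool array cache with one bulk
 * call, returning one page to the caller and keeping the rest cached.
 */
static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
						 gfp_t gfp)
{
	const int bulk = PP_ALLOC_CACHE_REFILL;
	LIST_HEAD(page_list);
	struct page *page, *tmp;

	/* Assumed to return the number of pages placed on @page_list */
	if (__alloc_pages_bulk_nodemask(gfp, pool->p.order, pool->p.nid,
					NULL, bulk, &page_list) == 0)
		return NULL;

	list_for_each_entry_safe(page, tmp, &page_list, lru) {
		list_del(&page->lru);
		/* DMA-map if configured; placeholder for existing code */
		if ((pool->p.flags & PP_FLAG_DMA_MAP) &&
		    unlikely(!page_pool_dma_map(pool, page))) {
			put_page(page);
			continue;
		}
		pool->alloc.cache[pool->alloc.count++] = page;
	}

	/* Return one page to the caller, keep the rest in the cache */
	if (likely(pool->alloc.count > 0))
		return pool->alloc.cache[--pool->alloc.count];

	return NULL;
}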
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer