Date:   Mon, 15 Mar 2021 09:40:38 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Alexander Duyck <alexander.duyck@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Chuck Lever <chuck.lever@...cle.com>,
        Christoph Hellwig <hch@...radead.org>,
        Matthew Wilcox <willy@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux-Net <netdev@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        Linux-NFS <linux-nfs@...r.kernel.org>, brouer@...hat.com
Subject: Re: [PATCH 7/7] net: page_pool: use alloc_pages_bulk in refill code path

On Sat, 13 Mar 2021 13:30:58 +0000
Mel Gorman <mgorman@...hsingularity.net> wrote:

> On Fri, Mar 12, 2021 at 11:44:09AM -0800, Alexander Duyck wrote:
> > > -       /* FUTURE development:
> > > -        *
> > > -        * Current slow-path essentially falls back to single page
> > > -        * allocations, which doesn't improve performance.  This code
> > > -        * need bulk allocation support from the page allocator code.
> > > -        */
> > > -
> > > -       /* Cache was empty, do real allocation */
> > > -#ifdef CONFIG_NUMA
> > > -       page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);
> > > -#else
> > > -       page = alloc_pages(gfp, pool->p.order);
> > > -#endif
> > > -       if (!page)
> > > +       if (unlikely(!__alloc_pages_bulk(gfp, pp_nid, NULL, bulk, &page_list)))
> > >                 return NULL;
> > >
> > > +       /* First page is extracted and returned to caller */
> > > +       first_page = list_first_entry(&page_list, struct page, lru);
> > > +       list_del(&first_page->lru);
> > > +  
> > 
> > This seems kind of broken to me. If you pull the first page and then
> > cannot map it you end up returning NULL even if you placed a number of
> > pages in the cache.
> >   
> 
> I think you're right but I'm punting this to Jesper to fix. He's more
> familiar with this particular code and can verify the performance is
> still ok for high speed networks.

Yes, I'll take a look at this, update the patch accordingly, and re-run
the performance tests.
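
For illustration, a minimal sketch of how the slow path could consume the
whole bulk list without returning NULL when the first page fails to map.
It assumes the __alloc_pages_bulk() signature from the quoted diff and a
boolean page_pool_dma_map(pool, page) helper; statistics/tracepoint updates
are omitted, so this is not the final patch, just one possible shape of it:

/* Fragment in the spirit of net/core/page_pool.c -- sketch only.
 * Pages that cannot be DMA-mapped are dropped individually instead of
 * aborting the whole refill, so the bulk allocation is never wasted.
 */
static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
						 gfp_t gfp)
{
	const int bulk = PP_ALLOC_CACHE_REFILL;
	struct page *page, *next, *first_page = NULL;
	int pp_nid = pool->p.nid;
	LIST_HEAD(page_list);

	if (unlikely(!__alloc_pages_bulk(gfp, pp_nid, NULL, bulk, &page_list)))
		return NULL;

	list_for_each_entry_safe(page, next, &page_list, lru) {
		list_del(&page->lru);

		/* Skip (and release) pages that fail DMA mapping */
		if ((pool->p.flags & PP_FLAG_DMA_MAP) &&
		    unlikely(!page_pool_dma_map(pool, page))) {
			put_page(page);
			continue;
		}

		/* First usable page goes to the caller; the rest refill
		 * the alloc-side cache.
		 */
		if (!first_page)
			first_page = page;
		else if (pool->alloc.count < PP_ALLOC_CACHE_SIZE)
			pool->alloc.cache[pool->alloc.count++] = page;
		else
			put_page(page);
	}

	return first_page;
}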

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
