Message-ID: <20210312145814.GA2577561@casper.infradead.org>
Date:   Fri, 12 Mar 2021 14:58:14 +0000
From:   Matthew Wilcox <willy@...radead.org>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     Mel Gorman <mgorman@...hsingularity.net>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Chuck Lever <chuck.lever@...cle.com>,
        Christoph Hellwig <hch@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux-Net <netdev@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        Linux-NFS <linux-nfs@...r.kernel.org>
Subject: Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator

On Fri, Mar 12, 2021 at 12:46:09PM +0100, Jesper Dangaard Brouer wrote:
> In my page_pool patch I'm bulk allocating 64 pages. I wanted to ask if
> this is too much? (PP_ALLOC_CACHE_REFILL=64).
> 
> The mlx5 driver has a while loop for allocating 64 pages, which is
> used in this case; that is why 64 was chosen.  If we choose a lower
> bulk number, then the bulk-alloc will just be called more times.

The thing about batching is that smaller batches are often better.
Let's suppose you need to allocate 100 pages for something, and the page
allocator takes up 90% of your latency budget -- essentially all of it
per-call overhead.  Batching just ten pages at a time cuts the number of
calls tenfold and reduces that overhead to 9%.  Going to 64 pages
reduces the overhead from 9% to 2% -- maybe that's important, but
possibly not.
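
To put rough numbers on that (assuming the allocator cost is purely
per call; this little userspace model and its helper are invented only
for illustration, nothing here is from the patch):

#include <math.h>
#include <stdio.h>

/* Overhead as a percentage of the latency budget, assuming it scales
 * with the number of allocator calls and is 90% at batch size 1. */
static double overhead_pct(unsigned int pages, unsigned int batch)
{
	double calls = ceil((double)pages / batch);

	return 90.0 * calls / pages;
}

int main(void)
{
	printf("batch  1: %4.1f%%\n", overhead_pct(100, 1));	/* 90.0 */
	printf("batch 10: %4.1f%%\n", overhead_pct(100, 10));	/*  9.0 */
	printf("batch 64: %4.1f%%\n", overhead_pct(100, 64));	/*  1.8 */
	return 0;
}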

> The result of the API is to deliver pages as a doubly-linked list via
> the LRU (page->lru) member.  If you are planning to use llist, then how
> would we handle that API change later?
> 
> Have you noticed that the two users store the struct-page pointers in an
> array?  We could have the caller provide the array to store the
> struct-page pointers, like we do with the kmem_cache_alloc_bulk API.
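
For concreteness, I read that as something shaped like
kmem_cache_alloc_bulk() -- the name and signature below are invented
just to show the shape, they aren't from the patch:

/*
 * Illustrative only: returns how many pages were actually stored in
 * page_array; the caller supplies the storage.
 */
#include <linux/gfp.h>
#include <linux/mm_types.h>

unsigned int alloc_pages_bulk_array(gfp_t gfp, unsigned int nr_pages,
				    struct page **page_array);

/* Example caller: refill a 64-entry driver cache in one call. */
static unsigned int refill_cache(struct page **cache)
{
	return alloc_pages_bulk_array(GFP_KERNEL, 64, cache);
}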

My preference would be for a pagevec.  That does limit you to 15 pages
per call [1], but I do think that might be enough.  And the overhead of
manipulating a linked list isn't free.

[1] Patches exist to increase this, because it turns out that 15 may
not be enough for all systems!  But even then, 255 would be the absolute
hard cap.
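
Something in that direction might look roughly like this (the function
name is invented just to show the shape; the pagevec helpers themselves
are the existing ones):

/*
 * Illustrative only: one call fills at most PAGEVEC_SIZE (15) slots.
 */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/pagevec.h>

unsigned int alloc_pages_bulk_pvec(gfp_t gfp, struct pagevec *pvec);

static void example(void)
{
	struct pagevec pvec;
	unsigned int i;

	pagevec_init(&pvec);
	alloc_pages_bulk_pvec(GFP_KERNEL, &pvec);

	/* Walk the array the pagevec wraps; put_page() stands in for
	 * whatever the real caller does with each page. */
	for (i = 0; i < pagevec_count(&pvec); i++)
		put_page(pvec.pages[i]);
}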
