Message-ID: <20210325125001.GW1719932@casper.infradead.org>
Date: Thu, 25 Mar 2021 12:50:01 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Chuck Lever <chuck.lever@...cle.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Christoph Hellwig <hch@...radead.org>,
Alexander Duyck <alexander.duyck@...il.com>,
Vlastimil Babka <vbabka@...e.cz>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux-Net <netdev@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
Linux-NFS <linux-nfs@...r.kernel.org>
Subject: Re: [PATCH 0/9 v6] Introduce a bulk order-0 page allocator with two
in-tree users
On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> This series introduces a bulk order-0 page allocator with sunrpc and
> the network page pool being the first users. The implementation is not
> efficient as semantics needed to be ironed out first. If no other semantic
> changes are needed, it can be made more efficient. Despite that, this
> is a performance-related improvement for users that require multiple pages for an
> operation without multiple round-trips to the page allocator. Quoting
> the last patch for the high-speed networking use-case
>
> Kernel     XDP stats    CPU     pps          Delta
> Baseline   XDP-RX CPU   total   3,771,046    n/a
> List       XDP-RX CPU   total   3,940,242    +4.49%
> Array      XDP-RX CPU   total   4,249,224    +12.68%
>
> From the SUNRPC traces of svc_alloc_arg()
>
> Single page: 25.007 us per call over 532,571 calls
> Bulk list: 6.258 us per call over 517,034 calls
> Bulk array: 4.590 us per call over 517,442 calls
>
> Both potential users in this series are corner cases (NFS and high-speed
> networks) so it is unlikely that most users will see any benefit in the
> short term. Other potential users are batch allocations for page
> cache readahead, fault-around and SLUB allocations when high-order pages
> are unavailable. It's unknown how much benefit would be seen by converting
> multiple page allocation calls to a single batch or what difference it may
> make to headline performance.
We have a third user, vmalloc(), with a 16% perf improvement. I know the
email says 21% but that includes the 5% improvement from switching to
kvmalloc() to allocate area->pages.
https://lore.kernel.org/linux-mm/20210323133948.GA10046@pc638.lan/
I don't know how many _frequent_ vmalloc users we have that will benefit
from this, but it's probably more than will benefit from improvements
to 200Gbit networking performance.