Message-Id: <20210310154704.9389055d0be891a0c3549cc2@linux-foundation.org>
Date: Wed, 10 Mar 2021 15:47:04 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Chuck Lever <chuck.lever@...cle.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Christoph Hellwig <hch@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux-Net <netdev@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
Linux-NFS <linux-nfs@...r.kernel.org>
Subject: Re: [PATCH 0/5] Introduce a bulk order-0 page allocator with two
in-tree users
On Wed, 10 Mar 2021 10:46:13 +0000 Mel Gorman <mgorman@...hsingularity.net> wrote:
> This series introduces a bulk order-0 page allocator with sunrpc and
> the network page pool being the first users.
<scratches head>
Right now, the [0/n] doesn't even tell us that it's a performance
patchset!
The whole point of this patchset seems to appear only in the final
paragraph of the final patch's changelog.
: For an XDP-redirect workload with the 100G mlx5 driver (which uses
: page_pool), redirecting xdp_frame packets into a veth that does XDP_PASS
: to create an SKB from the xdp_frame, the page cannot then be returned to
: the page_pool. In this case, we saw[1] an improvement of 18.8% from using
: the alloc_pages_bulk API (3,677,958 pps -> 4,368,926 pps).
Much more detail on the overall objective and the observed results,
please?
Also, that workload looks awfully corner-casey. How beneficial is this
work for more general and widely-used operations?
> The implementation is not
> particularly efficient and the intention is to iron out what the semantics
> of the API should be for users. Once the semantics are ironed out, it can
> be made more efficient.
And some guesstimates about how much benefit remains to be realized
would be helpful.