Message-ID: <161650953543.3977.9991115610287676892.stgit@klimt.1015granger.net>
Date: Tue, 23 Mar 2021 11:09:51 -0400
From: Chuck Lever <chuck.lever@...cle.com>
To: mgorman@...hsingularity.net
Cc: brouer@...hat.com, vbabka@...e.cz, akpm@...ux-foundation.org,
hch@...radead.org, alexander.duyck@...il.com, willy@...radead.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
linux-mm@...ck.org, linux-nfs@...r.kernel.org
Subject: [PATCH 0/2] SUNRPC consumer for the bulk page allocator
This patch set and the measurements below are based on yesterday's
bulk allocator series:
git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v5r9
The patches change SUNRPC to invoke the array-based bulk allocator
instead of alloc_page().
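
The conversion is small: rather than calling alloc_page() once per
empty slot in rqstp->rq_pages, svc_alloc_arg() can hand the whole
array to the allocator and let it fill only the NULL entries. The
sketch below is illustrative only (the actual conversion is in patch
2/2); it assumes an array-based entry point of the form
alloc_pages_bulk_array(gfp, count, array) that skips already-populated
slots and returns the total number of populated slots, and the helper
name here is made up for the example:

	/*
	 * Illustrative sketch, not the actual patch: refill rq_pages
	 * with a single bulk call rather than one alloc_page() per
	 * empty slot.
	 */
	static bool svc_rqst_refill_pages(struct svc_rqst *rqstp,
					  unsigned int pages)
	{
		unsigned int filled;

		/* Fills only the NULL slots in rq_pages[0..pages-1]. */
		filled = alloc_pages_bulk_array(GFP_KERNEL, pages,
						rqstp->rq_pages);

		/* On a partial fill, the caller can retry or wait. */
		return filled == pages;
	}
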
The micro-benchmark results are promising. I ran a mixture of 256KB
reads and writes over NFSv3. The server's kernel is built with KASAN
enabled, so the absolute latencies are inflated, but I believe the
relative comparison is still valid.
I instrumented svc_recv() to measure the latency of each call to
svc_alloc_arg() and report it via a trace point. The following
results are averages across the trace events.
Single page: 25.007 us per call over 532,571 calls
Bulk list:    6.258 us per call over 517,034 calls
Bulk array:   4.590 us per call over 517,442 calls
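
A measurement of this kind can be reproduced roughly as follows; this
is only a sketch of the idea, not the instrumentation actually used,
and trace_printk() stands in for the real trace point:

	/*
	 * Illustrative only: time one svc_alloc_arg() call from within
	 * svc_recv() and report the latency via ftrace.
	 */
	{
		ktime_t begin, end;
		int err;

		begin = ktime_get();
		err = svc_alloc_arg(rqstp);
		end = ktime_get();

		/* Report the per-call latency in microseconds. */
		trace_printk("svc_alloc_arg: %d, latency %lld us\n",
			     err, ktime_us_delta(end, begin));
	}
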
For SUNRPC, the simplicity and better performance of the array-based
API make it superior to the list-based API.
---
Chuck Lever (2):
SUNRPC: Set rq_page_end differently
SUNRPC: Refresh rq_pages using a bulk page allocator
net/sunrpc/svc_xprt.c | 33 +++++++++++++++++----------------
1 file changed, 17 insertions(+), 16 deletions(-)
--
Chuck Lever