Message-ID: <20210325140657.GA1908@pc638.lan>
Date: Thu, 25 Mar 2021 15:06:57 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Matthew Wilcox <willy@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Uladzislau Rezki <urezki@...il.com>,
	Chuck Lever <chuck.lever@...cle.com>,
	Jesper Dangaard Brouer <brouer@...hat.com>,
	Christoph Hellwig <hch@...radead.org>,
	Alexander Duyck <alexander.duyck@...il.com>,
	Vlastimil Babka <vbabka@...e.cz>,
	Ilias Apalodimas <ilias.apalodimas@...aro.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Linux-Net <netdev@...r.kernel.org>,
	Linux-MM <linux-mm@...ck.org>,
	Linux-NFS <linux-nfs@...r.kernel.org>
Subject: Re: [PATCH 0/9 v6] Introduce a bulk order-0 page allocator with two
in-tree users
> On Thu, Mar 25, 2021 at 12:50:01PM +0000, Matthew Wilcox wrote:
> > On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> > > This series introduces a bulk order-0 page allocator with sunrpc and
> > > the network page pool being the first users. The implementation is not
> > > efficient as semantics needed to be ironed out first. If no other semantic
> > > changes are needed, it can be made more efficient. Despite that, this
> > > is a performance-related improvement for users that require multiple
> > > pages for an operation without multiple round-trips to the page
> > > allocator. Quoting the last patch for the high-speed networking
> > > use-case:
> > >
> > > Kernel          XDP stats       CPU     pps         Delta
> > > Baseline        XDP-RX CPU      total   3,771,046     n/a
> > > List            XDP-RX CPU      total   3,940,242   +4.49%
> > > Array           XDP-RX CPU      total   4,249,224  +12.68%
> > >
> > > From the SUNRPC traces of svc_alloc_arg():
> > >
> > > Single page: 25.007 us per call over 532,571 calls
> > > Bulk list: 6.258 us per call over 517,034 calls
> > > Bulk array: 4.590 us per call over 517,442 calls
> > >
> > > Both potential users in this series are corner cases (NFS and high-speed
> > > networks) so it is unlikely that most users will see any benefit in the
> > > short term. Other potential users are batch allocations for page
> > > cache readahead, fault around and SLUB allocations when high-order pages
> > > are unavailable. It's unknown how much benefit would be seen by converting
> > > multiple page allocation calls to a single batch or what difference it may
> > > make to headline performance.
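[ Illustration, not part of the quoted cover letter: a minimal usage
  sketch of the array variant, assuming this series' semantics that a
  zeroed array is filled from index 0 and that the return value is the
  number of pages placed. fill_page_array() is a hypothetical helper. ]

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/string.h>

/* Sketch only: grab @nr order-0 pages in one trip to the allocator,
 * then fall back to single-page allocation for any slots the bulk
 * call could not fill. */
static int fill_page_array(struct page **pages, unsigned long nr)
{
	unsigned long filled;

	memset(pages, 0, nr * sizeof(*pages));
	filled = alloc_pages_bulk_array(GFP_KERNEL, nr, pages);

	while (filled < nr) {
		pages[filled] = alloc_page(GFP_KERNEL);
		if (!pages[filled])
			return -ENOMEM;
		filled++;
	}
	return 0;
}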
> >
> > We have a third user, vmalloc(), with a 16% perf improvement. I know the
> > email says 21% but that includes the 5% improvement from switching to
> > kvmalloc() to allocate area->pages.
> >
> > https://lore.kernel.org/linux-mm/20210323133948.GA10046@pc638.lan/
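[ Editorial sketch, not taken from the linked patch itself: the
  kvmalloc() part mentioned above amounts to allocating vmalloc's
  page-pointer array with kvmalloc_node(), so large arrays can
  transparently fall back to vmalloc; exact names in the patch may
  differ. ]

	/* Allocate area->pages with kvmalloc_node() so that large
	 * page-pointer arrays may fall back to vmalloc. */
	area->pages = kvmalloc_node(array_size(nr_pages, sizeof(struct page *)),
				    gfp_mask, node);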
> >
>
> That's fairly promising. Assuming the bulk allocator gets merged, it would
> make sense to add vmalloc on top. Thanks for bringing it to my attention
> because it's far more relevant than my imaginary potential use cases.
>
For vmalloc we should be able to allocate on a specific NUMA node; at
least the current vmalloc interface takes one into account. As far as I
can see, the bulk interface allocates on the current node only:
static inline unsigned long
alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
{
	/* numa_mem_id() always targets the local node. */
	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
}
Or am I missing something?
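
If not, a node-aware wrapper looks straightforward; something like the
following sketch (the name alloc_pages_bulk_array_node and its nid
parameter are hypothetical, not part of the posted series):

static inline unsigned long
alloc_pages_bulk_array_node(gfp_t gfp, int nid, unsigned long nr_pages,
			    struct page **page_array)
{
	/* Same as alloc_pages_bulk_array(), but with an explicit
	 * preferred node id instead of numa_mem_id(). */
	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array);
}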
--
Vlad Rezki