Message-ID: <7cfbcde0-9d17-0a89-49ae-942a80c63feb@redhat.com>
Date: Tue, 6 Dec 2022 10:43:05 +0100
From: Jesper Dangaard Brouer <jbrouer@...hat.com>
To: Matthew Wilcox <willy@...radead.org>,
Jesper Dangaard Brouer <jbrouer@...hat.com>
Cc: brouer@...hat.com, Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
netdev@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 00/24] Split page pools from struct page
On 05/12/2022 17.31, Matthew Wilcox wrote:
> On Mon, Dec 05, 2022 at 04:34:10PM +0100, Jesper Dangaard Brouer wrote:
>> I have a micro-benchmark [1][2], that I want to run on this patchset.
>> Reducing the asm code 'text' size is unlikely to show up in a
>> micro-benchmark. The 100Gbit mlx5 driver uses page_pool, so perhaps I can
>> also run a packet benchmark that can show the (expected) performance
>> improvement.
>>
>> [1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_simple.c
>> [2] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_cross_cpu.c
>
> Appreciate it! I'm not expecting any performance change outside noise,
> but things do surprise me. I'd appreciate it if you'd test with a
> "distro" config, ie enabling CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP so
> we show the most expensive case.
>
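A quick note on how I intend to run [1]/[2]: they are out-of-tree modules
that print their results to the kernel log when loaded. Roughly like this
(build steps and module handling from memory, details may differ):

$ cd prototype-kernel/kernel/lib
$ make                                      # build against the running kernel
$ sudo insmod bench_page_pool_simple.ko     # results land in the kernel log
$ sudo insmod bench_page_pool_cross_cpu.ko
$ sudo dmesg | tail -n 50                   # per-element cost reported here
$ sudo rmmod bench_page_pool_simple bench_page_pool_cross_cpu
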
I have CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y, BUT it isn't enabled at
runtime by default.
Should I also select CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
or enable it via sysctl?
$ grep -H . /proc/sys/vm/hugetlb_optimize_vmemmap
/proc/sys/vm/hugetlb_optimize_vmemmap:0
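
If the runtime route is what you want, I assume it is something like this
(sysctl name guessed from the procfs path above, not yet tested here):

$ sudo sysctl vm.hugetlb_optimize_vmemmap=1
$ grep -H . /proc/sys/vm/hugetlb_optimize_vmemmap
/proc/sys/vm/hugetlb_optimize_vmemmap:1

AFAIK the sysctl only affects hugetlb pages allocated after it is set,
while the _DEFAULT_ON Kconfig merely flips the boot-time default.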
--Jesper