Message-ID: <c7411175b332f3befb5bebb6a75c7b91f2c1dbbc.camel@amazon.co.uk>
Date: Mon, 24 Nov 2025 18:01:35 +0000
From: "Stamatis, Ilias" <ilstam@...zon.co.uk>
To: "Stamatis, Ilias" <ilstam@...zon.co.uk>, "akpm@...ux-foundation.org"
<akpm@...ux-foundation.org>
CC: "nadav.amit@...il.com" <nadav.amit@...il.com>, "david@...nel.org"
<david@...nel.org>, "linux-mm@...ck.org" <linux-mm@...ck.org>,
"andriy.shevchenko@...ux.intel.com" <andriy.shevchenko@...ux.intel.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"huang.ying.caritas@...il.com" <huang.ying.caritas@...il.com>,
"bhe@...hat.com" <bhe@...hat.com>, "nh-open-source@...zon.com"
<nh-open-source@...zon.com>
Subject: Re: [PATCH] Reinstate "resource: avoid unnecessary lookups in
find_next_iomem_res()"

On Mon, 2025-11-24 at 08:58 -0800, Andrew Morton wrote:
> On Mon, 24 Nov 2025 16:53:49 +0000 Ilias Stamatis <ilstam@...zon.com> wrote:
>
> > Commit 97523a4edb7b ("kernel/resource: remove first_lvl / siblings_only
> > logic") removed an optimization introduced by commit 756398750e11
> > ("resource: avoid unnecessary lookups in find_next_iomem_res()"). That
> > was not called out in the message of the first commit explicitly so it's
> > not entirely clear whether removing the optimization happened
> > inadvertently or not.
> >
> > As the original commit message of the optimization explains there is no
> > point considering the children of a subtree in find_next_iomem_res() if
> > the top level range does not match. Reinstating the optimization results
> > in significant performance improvements in systems with very large iomem
> > maps when mmaping /dev/mem.
>
> It would be great if we could quantify "significant performance
> improvements"?

Hi Andrew and Andy,

You are right to call that out, and apologies for leaving it vague.
I've done my testing with older kernel versions on systems where `wc -l
/proc/iomem` can return ~5k lines. In that environment I see mmapping parts of
/dev/mem take 700-1500μs without the optimisation and 10-50μs with it.
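
For reference, a minimal reproducer for that kind of measurement looks roughly
like the sketch below (illustrative only; the offset and length are
placeholders, and /dev/mem access generally needs root and a kernel that allows
it, e.g. depending on CONFIG_STRICT_DEVMEM):

/* Illustrative sketch: time a single mmap() of a /dev/mem range. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 4096;          /* placeholder length */
	const off_t offset = 0x100000;    /* placeholder physical offset */
	struct timespec t0, t1;
	void *p;
	int fd;

	fd = open("/dev/mem", O_RDONLY);
	if (fd < 0) {
		perror("open /dev/mem");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, offset);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	printf("mmap took %ld us\n",
	       (t1.tv_sec - t0.tv_sec) * 1000000L +
	       (t1.tv_nsec - t0.tv_nsec) / 1000L);

	munmap(p, len);
	close(fd);
	return 0;
}
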
The real-world use case we care about is hypervisor live update, where having
to do many of these mmap() calls serially can significantly increase guest
downtime when each call is 20-30x more expensive.

> It also would be good to know which exact function(s) is a bottleneck.

Perf tracing shows that ~95% of CPU time is spent in find_next_iomem_res(), the
full call stack being:
find_next_iomem_res+0x3b ([kernel.kallsyms])
walk_system_ram_range+0x98 ([kernel.kallsyms])
pat_pagerange_is_ram+0x6e ([kernel.kallsyms])
reserve_pfn_range+0x47 ([kernel.kallsyms])
track_pfn_remap+0xb6 ([kernel.kallsyms])
remap_pfn_range+0x3b ([kernel.kallsyms])
mmap_mem+0x9e ([kernel.kallsyms])
mm_struct_mmap_region+0x1f3 ([kernel.kallsyms])
mmap_region+0xa3 ([kernel.kallsyms])
do_mmap+0x3ea ([kernel.kallsyms])
vm_mmap_pgoff+0xa2 ([kernel.kallsyms])
ksys_mmap_pgoff+0xec ([kernel.kallsyms])
do_syscall_64+0x29 ([kernel.kallsyms])
entry_SYSCALL_64_after_hwframe+0x4b ([kernel.kallsyms])
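
For completeness, the reinstated check simply avoids descending into the
children of a top-level resource whose range cannot overlap the requested
range, since none of its children can match either. A simplified userspace
sketch of the traversal idea (not the actual kernel code, which operates on
struct resource in kernel/resource.c):

/* Simplified sketch of the skip-children idea only. */
#include <stdbool.h>
#include <stdio.h>

struct node {
	unsigned long start, end;
	bool wanted;                      /* stands in for flags/desc checks */
	struct node *parent, *sibling, *child;
};

/* Depth-first successor; optionally skip the current node's children. */
static struct node *next_node(struct node *p, bool skip_children)
{
	if (!skip_children && p->child)
		return p->child;
	while (!p->sibling && p->parent)
		p = p->parent;
	return p->sibling;
}

static struct node *find_first_match(struct node *root,
				     unsigned long start, unsigned long end)
{
	bool skip = false;
	struct node *p;

	for (p = root->child; p; p = next_node(p, skip)) {
		/*
		 * If p does not overlap [start, end], none of its children
		 * can either, so do not descend into them.
		 */
		skip = p->start > end || p->end < start;
		if (!skip && p->wanted)
			return p;
	}
	return NULL;
}

int main(void)
{
	struct node root = { 0, ~0UL };
	struct node a   = { 0x1000, 0x1fff, false, &root };
	struct node a1  = { 0x1000, 0x10ff, true,  &a    };
	struct node b   = { 0x2000, 0x2fff, true,  &root };
	struct node *hit;

	root.child = &a;
	a.sibling = &b;
	a.child = &a1;

	/* [0x2000, 0x20ff] cannot overlap 'a', so 'a1' is never visited. */
	hit = find_first_match(&root, 0x2000, 0x20ff);
	printf("hit: [%#lx, %#lx]\n", hit->start, hit->end);
	return 0;
}
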
Thanks,
Ilias