Message-ID: <f3fe7efd74c6011ddb35e1f1e90eba43af864aa4.camel@amazon.co.uk>
Date: Mon, 24 Nov 2025 19:35:31 +0000
From: "Stamatis, Ilias" <ilstam@...zon.co.uk>
To: "andriy.shevchenko@...ux.intel.com" <andriy.shevchenko@...ux.intel.com>
CC: "nadav.amit@...il.com" <nadav.amit@...il.com>, "david@...nel.org"
<david@...nel.org>, "linux-mm@...ck.org" <linux-mm@...ck.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"bhe@...hat.com" <bhe@...hat.com>, "huang.ying.caritas@...il.com"
<huang.ying.caritas@...il.com>, "nh-open-source@...zon.com"
<nh-open-source@...zon.com>
Subject: Re: [PATCH] Reinstate "resource: avoid unnecessary lookups in
find_next_iomem_res()"
On Mon, 2025-11-24 at 20:55 +0200, andriy.shevchenko@...ux.intel.com wrote:
> On Mon, Nov 24, 2025 at 06:01:35PM +0000, Stamatis, Ilias wrote:
> > On Mon, 2025-11-24 at 08:58 -0800, Andrew Morton wrote:
> > > On Mon, 24 Nov 2025 16:53:49 +0000 Ilias Stamatis <ilstam@...zon.com> wrote:
> > >
> > > > Commit 97523a4edb7b ("kernel/resource: remove first_lvl / siblings_only
> > > > logic") removed an optimization introduced by commit 756398750e11
> > > > ("resource: avoid unnecessary lookups in find_next_iomem_res()"). This
> > > > was not explicitly called out in the message of the former commit, so
> > > > it is not entirely clear whether the optimization was removed
> > > > intentionally or inadvertently.
> > > >
> > > > As the original commit message of the optimization explains, there is
> > > > no point in considering the children of a subtree in
> > > > find_next_iomem_res() if the top-level range does not match.
> > > > Reinstating the optimization results in significant performance
> > > > improvements on systems with very large iomem maps when mmapping
> > > > /dev/mem.
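
For context, here is a minimal sketch of the shape of that skip. This is
my illustration rather than the actual patch, and
next_resource_skip_children() is a hypothetical helper name. The key
invariant is that a child resource's range is contained within its
parent's, so a subtree whose root does not intersect the searched range
cannot contain a match and can be skipped entirely:

	/*
	 * Hypothetical helper: pre-order walk of the resource tree that
	 * descends into children only when the current node overlaps
	 * [start, end]. Non-overlapping subtrees are skipped entirely.
	 */
	static struct resource *next_resource_skip_children(struct resource *p,
							    resource_size_t start,
							    resource_size_t end)
	{
		if (p->child && p->start <= end && p->end >= start)
			return p->child;
		while (!p->sibling && p->parent)
			p = p->parent;
		return p->sibling;
	}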
> > >
> > > It would be great if we could quantify "significant performance
> > > improvements"?
> >
> > Hi Andrew and Andy,
> >
> > You are right to call that out, and I apologise for leaving it vague.
> >
> > I've done my testing with older kernel versions on systems where `wc -l
> > /proc/iomem` can return ~5k lines. In that environment I see mmapping
> > parts of /dev/mem take 700-1500μs without the optimisation and 10-50μs
> > with the optimisation.
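
For illustration, the measurement boils down to timing individual mmap()
calls against /dev/mem, roughly like the sketch below. PHYS_ADDR is a
placeholder for a range that is actually mappable on the test machine,
and my real harness maps many ranges in a loop rather than just one:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <time.h>
	#include <unistd.h>

	int main(void)
	{
		/* Placeholder; needs root, and a valid range on your system. */
		const off_t PHYS_ADDR = 0x100000000;
		const size_t LEN = 4096;
		struct timespec t0, t1;

		int fd = open("/dev/mem", O_RDONLY);
		if (fd < 0) { perror("open"); return 1; }

		clock_gettime(CLOCK_MONOTONIC, &t0);
		void *p = mmap(NULL, LEN, PROT_READ, MAP_SHARED, fd, PHYS_ADDR);
		clock_gettime(CLOCK_MONOTONIC, &t1);
		if (p == MAP_FAILED) { perror("mmap"); return 1; }

		printf("mmap took %ld us\n",
		       (t1.tv_sec - t0.tv_sec) * 1000000L +
		       (t1.tv_nsec - t0.tv_nsec) / 1000L);

		munmap(p, LEN);
		close(fd);
		return 0;
	}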
> >
> > The real-world use case we care about is hypervisor live update, where
> > having to perform many of these mmap() calls serially can significantly
> > increase guest downtime if each call costs 20-30x more.
>
> Thanks for providing this information.
>
> > > It would also be good to know exactly which function(s) are the bottleneck.
> >
> > Perf tracing shows that ~95% of CPU time is spent in find_next_iomem_res().
>
> Have you investigated the possibility of putting that check back directly
> into the culprit?
>
I'm sorry, I don't understand this. Could you please clarify what you mean?
What do you consider to be the culprit, and which check are you referring to?