Message-ID: <8f1e0a5a10df71a8b4e8856feefd256bb150c38a.camel@bodenbinder.de>
Date: Thu, 24 Oct 2024 13:20:47 +0200
From: Matthias Bodenbinder <matthias@...enbinder.de>
To: Vlastimil Babka <vbabka@...e.cz>, Thorsten Leemhuis
<regressions@...mhuis.info>, Rik van Riel <riel@...riel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Linux kernel regressions list
<regressions@...ts.linux.dev>, LKML <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>, Yang Shi <yang@...amperecomputing.com>,
Petr Tesarik <ptesarik@...e.com>
Subject: Re: darktable performance regression on AMD systems caused by "mm:
align larger anonymous mappings on THP boundaries"
On Thursday, 24.10.2024 at 12:23 +0200, Vlastimil Babka wrote:
> On 10/24/24 11:58, Vlastimil Babka wrote:
> > On 10/24/24 09:45, Thorsten Leemhuis wrote:
> > > Hi, Thorsten here, the Linux kernel's regression tracker.
> > >
> > > Rik, I noticed a report about a regression in bugzilla.kernel.org that
> > > appears to be caused by the following change of yours:
> > >
> > > efa7df3e3bb5da ("mm: align larger anonymous mappings on THP boundaries")
> > > [v6.7]
> > >
> > > It might be one of those "some things got faster, a few things became
> > > slower" situations. Not sure. Felt odd that the reporter was able to
> > > reproduce it on two AMD systems, but not on a Intel system. Maybe there
> > > is a bug somewhere else that was exposed by this.
> >
> > It seems very similar to what we've seen with SPEC benchmarks such as cactus
> > and bisected to the same commit:
> >
> > https://bugzilla.suse.com/show_bug.cgi?id=1229012
> >
> > The exact regression varies per system. Intel regresses too, but relatively
> > less. The theory is that there are many large-ish allocations whose
> > individual sizes are not aligned to 2MB and that would previously have been
> > merged; commit efa7df3e3bb5da causes them to become separate areas, each
> > aligning its start to a 2MB boundary, with gaps in between. This (gaps and
> > vma fragmentation) is itself not great, but most of the problem seemed to
> > come from the start alignment, which together with the access pattern
> > causes more TLB or cache misses due to limited associativity.
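> >
> > For illustration, a minimal user-space sketch of the effect (the 3MB size
> > is just an example of "THP sized or larger, but not a 2MB multiple"): on a
> > kernel with efa7df3e3bb5da, each mapping should start on a 2MB boundary
> > with an unmapped gap behind it, while older kernels typically return one
> > merged, unaligned region.
> >
> > #include <stdio.h>
> > #include <sys/mman.h>
> >
> > int main(void)
> > {
> > 	size_t len = 3UL << 20;	/* >= PMD_SIZE, but not a PMD_SIZE multiple */
> >
> > 	for (int i = 0; i < 4; i++) {
> > 		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
> > 			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> > 		if (p == MAP_FAILED)
> > 			return 1;
> > 		/* On affected kernels each start is 2MB-aligned, leaving
> > 		 * a 1MB hole between consecutive 3MB mappings. */
> > 		printf("%p (2MB-aligned: %d)\n", p,
> > 		       ((unsigned long)p & ((2UL << 20) - 1)) == 0);
> > 	}
> > 	return 0;
> > }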
> >
> > So maybe darktable has a similar problem. A simple candidate fix could
> > change commit efa7df3e3bb5da so that the mapping size has to be a multiple
> > of the THP size (2MB) in order to be aligned; right now it is enough for
> > the mapping to be THP sized or larger.
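> >
> > For illustration, the kernel's IS_ALIGNED() is a simple mask test, so with
> > PMD_SIZE = 2MB (x86-64 with 4K pages) such a check would keep the
> > alignment for e.g. a 4MB mapping but skip it for the 3MB case above:
> >
> > 	/* include/linux/align.h */
> > 	#define IS_ALIGNED(x, a)	(((x) & ((typeof(x))(a) - 1)) == 0)
> >
> > 	IS_ALIGNED(4UL << 20, PMD_SIZE);	/* true: stays THP-aligned */
> > 	IS_ALIGNED(3UL << 20, PMD_SIZE);	/* false: regular search */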
>
> Maybe this could be enough to fix the issue? (on 6.12-rc4)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 9c0fb43064b5..a5297cfb1dfc 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -900,7 +900,8 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
>  
>  	if (get_area) {
>  		addr = get_area(file, addr, len, pgoff, flags);
> -	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
> +	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)
> +		   && IS_ALIGNED(len, PMD_SIZE)) {
>  		/* Ensures that larger anonymous mappings are THP aligned. */
>  		addr = thp_get_unmapped_area_vmflags(file, addr, len,
>  						     pgoff, flags, vm_flags);
>
Hi,

this is Matthias, the reporter of the darktable issue:
https://bugzilla.kernel.org/show_bug.cgi?id=219366

I applied your patch to kernel 6.11.5 and it works: the darktable pixel
pipeline goes down to 3.8 s, the same performance as with kernel 6.6.x. It
was 4.7 s without the patch.