Message-ID: <a7585f3e-d6c7-4982-8214-63a7ec6258ad@suse.cz>
Date: Thu, 24 Oct 2024 12:56:27 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Petr Tesarik <ptesarik@...e.com>
Cc: Thorsten Leemhuis <regressions@...mhuis.info>,
Rik van Riel <riel@...riel.com>, Matthias <matthias@...enbinder.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux kernel regressions list <regressions@...ts.linux.dev>,
LKML <linux-kernel@...r.kernel.org>, Linux-MM <linux-mm@...ck.org>,
Yang Shi <yang@...amperecomputing.com>
Subject: Re: darktable performance regression on AMD systems caused by "mm:
align larger anonymous mappings on THP boundaries"
On 10/24/24 12:49, Petr Tesarik wrote:
> On Thu, 24 Oct 2024 12:23:48 +0200
> Vlastimil Babka <vbabka@...e.cz> wrote:
>
>> On 10/24/24 11:58, Vlastimil Babka wrote:
>> > On 10/24/24 09:45, Thorsten Leemhuis wrote:
>> >> Hi, Thorsten here, the Linux kernel's regression tracker.
>> >>
>> >> Rik, I noticed a report about a regression in bugzilla.kernel.org that
>> >> appears to be caused by the following change of yours:
>> >>
>> >> efa7df3e3bb5da ("mm: align larger anonymous mappings on THP boundaries")
>> >> [v6.7]
>> >>
>> >> It might be one of those "some things got faster, a few things became
>> >> slower" situations. Not sure. Felt odd that the reporter was able to
>> >> reproduce it on two AMD systems, but not on an Intel system. Maybe there
>> >> is a bug somewhere else that was exposed by this.
>> >
>> > It seems very similar to what we've seen with spec benchmarks such as cactus
>> > and bisected to the same commit:
>> >
>> > https://bugzilla.suse.com/show_bug.cgi?id=1229012
>> >
>> > The exact regression varies per system. Intel regresses too, but relatively
>> > less. The theory is that there are many large-ish allocations whose
>> > individual sizes are not aligned to 2MB and that would previously have been
>> > merged; commit efa7df3e3bb5da causes them to become separate areas, each
>> > aligning its start to a 2MB boundary, with gaps in between. This (gaps and
>> > vma fragmentation) itself is not great, but most of the problem seemed to
>> > come from the start alignment, which together with the access pattern
>> > causes more TLB or cache misses due to limited associativity.
>> >
>> > So maybe darktable has a similar problem. A simple candidate fix could
>> > change commit efa7df3e3bb5da so that the mapping size has to be a multiple
>> > of the THP size (2MB) in order to become aligned; right now it's enough if
>> > it's THP-sized or larger.
>>
>> Maybe this could be enough to fix the issue? (on 6.12-rc4)
>
>
> Yes, this should work. I was unsure if thp_get_unmapped_area_vmflags()
> differs in other ways from mm_get_unmapped_area_vmflags(), which might
> still be relevant. I mean, does mm_get_unmapped_area_vmflags() also
> prefer to allocate THPs if the virtual memory block is large enough?
Well, any sufficiently large area spanning a PMD-aligned/sized block (either
the result of a single allocation or of merging several allocations) can
become populated by THPs (at least in those aligned blocks), and the
preference depends on the system-wide THP settings and madvise(MADV_HUGEPAGE)
or prctl.
But mm_get_unmapped_area_vmflags() will AFAIK not try to align the area to
PMD size the way the thp_ version would, even if the request is large enough.
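As a minimal sketch of that preference side (again not taken from the thread,
and assuming the 2MB PMD size of x86-64): MADV_HUGEPAGE marks a range as
preferring THPs, and any PMD-aligned 2MB block fully inside the range can be
backed by a THP regardless of where the mapping starts, subject to the
system-wide setting in /sys/kernel/mm/transparent_hugepage/enabled.

/*
 * Hedged sketch: request THP backing for a large anonymous mapping via
 * MADV_HUGEPAGE and fault it in; whether THPs are actually used depends
 * on the system-wide THP settings.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define MB	(1024UL * 1024)

int main(void)
{
	size_t len = 8 * MB;
	volatile char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	if (madvise((void *)p, len, MADV_HUGEPAGE))
		perror("madvise(MADV_HUGEPAGE)");

	/* Fault the whole range in; PMD-aligned 2MB blocks may get THPs. */
	for (size_t off = 0; off < len; off += 4096)
		p[off] = 1;

	printf("mapped at %p (offset within 2MB block: %#lx)\n",
	       (void *)p, (unsigned long)p & (2 * MB - 1));
	return 0;
}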
> Petr T
>
>>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 9c0fb43064b5..a5297cfb1dfc 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -900,7 +900,8 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
>>
>> if (get_area) {
>> addr = get_area(file, addr, len, pgoff, flags);
>> - } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
>> + } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)
>> + && IS_ALIGNED(len, PMD_SIZE)) {
>> /* Ensures that larger anonymous mappings are THP aligned. */
>> addr = thp_get_unmapped_area_vmflags(file, addr, len,
>> pgoff, flags, vm_flags);
>>
>