Message-ID: <a34eee7e-3970-4cdd-8c09-bca51132db50@arm.com>
Date: Wed, 31 Jan 2024 15:02:56 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: David Hildenbrand <david@...hat.com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>, Russell King <linux@...linux.org.uk>,
Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Dinh Nguyen <dinguyen@...nel.org>, Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
"Aneesh Kumar K.V" <aneesh.kumar@...nel.org>,
"Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>,
Paul Walmsley <paul.walmsley@...ive.com>, Palmer Dabbelt
<palmer@...belt.com>, Albert Ou <aou@...s.berkeley.edu>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>, "David S. Miller"
<davem@...emloft.net>, linux-arm-kernel@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, linux-riscv@...ts.infradead.org,
linux-s390@...r.kernel.org, sparclinux@...r.kernel.org
Subject: Re: [PATCH v3 00/15] mm/memory: optimize fork() with PTE-mapped THP
On 31/01/2024 14:29, David Hildenbrand wrote:
>>> Note that regarding NUMA effects, I mean when some memory access within the same
>>> socket is faster/slower even with only a single node. On AMD EPYC that's
>>> possible, depending on which core you are running and on which memory controller
>>> the memory you want to access is located. If both are in different quadrants
>>> IIUC, the access latency will be different.
>>
>> I've configured NUMA to only bring the RAM and CPUs for a single socket
>> online, so I shouldn't be seeing any of these effects. Anyway, I've been using
>> the Altra as a secondary because it's so much slower than the M2. Let me move
>> over to it and see if everything looks more straightforward there.
>
> Better to use a system that people will actually run Linux production
> workloads on, even if it is slower :)
>
> [...]
>
>>>>
>>>> I'll continue to mess around with it until the end of the day. But if I'm
>>>> not making any headway by then, I'll change tack; I'll just measure the
>>>> performance of my contpte changes using your fork/zap stuff as the baseline
>>>> and post based on that.
>>>
>>> You should likely not focus on M2 results. Just pick a representative bare metal
>>> machine where you get consistent, explainable results.
>>>
>>> Nothing in the code is fine-tuned for a particular architecture so far, only
>>> order-0 handling is kept separate.
>>>
>>> BTW: I see the exact same speedups for dontneed that I see for munmap. For
>>> example, for order-9, it goes from 0.023412s -> 0.009785s, so -58%. So I'm
>>> curious why you see a speedup for munmap but not for dontneed.
>>
>> Ugh... ok, coming up.
>
> Hopefully you were just staring at the wrong numbers (e.g., measured with only
> the fork patches applied), because both (munmap/pte-dontneed) use the exact
> same code path.
>
Ahh... I'm doing pte-dontneed, which is the only option in your original
benchmark - it does MADV_DONTNEED one page at a time. It looks like your new
benchmark has an additional "dontneed" option that does it in one shot. Which
option are you running? Assuming the latter, I think that explains it.
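
For anyone following along, this is roughly the difference as I understand it.
The function names below are mine, just for illustration; this is a sketch of
the two variants, not the actual benchmark code:

	/* Sketch only; names are hypothetical, not from the benchmark. */
	#include <sys/mman.h>
	#include <unistd.h>

	/* "pte-dontneed": one madvise() call per base page, so each call
	 * covers a single PTE and there is nothing to batch. */
	static void dontneed_per_page(char *buf, size_t len)
	{
		size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
		size_t off;

		for (off = 0; off < len; off += pgsz)
			madvise(buf + off, pgsz, MADV_DONTNEED);
	}

	/* "dontneed": a single call covering the whole range, which lets
	 * the kernel zap runs of contiguous PTEs in one go, the same way
	 * munmap() does. */
	static void dontneed_one_shot(char *buf, size_t len)
	{
		madvise(buf, len, MADV_DONTNEED);
	}

If that's right, the per-page variant would never see any benefit from
batching, which would explain why I see a speedup for munmap but not for
pte-dontneed.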