Message-ID: <c63870b0-690a-4051-b4f5-296cf3b73be2@redhat.com>
Date: Wed, 31 Jan 2024 14:38:09 +0100
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
 Matthew Wilcox <willy@...radead.org>, Russell King <linux@...linux.org.uk>,
 Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
 Dinh Nguyen <dinguyen@...nel.org>, Michael Ellerman <mpe@...erman.id.au>,
 Nicholas Piggin <npiggin@...il.com>,
 Christophe Leroy <christophe.leroy@...roup.eu>,
 "Aneesh Kumar K.V" <aneesh.kumar@...nel.org>,
 "Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>,
 Paul Walmsley <paul.walmsley@...ive.com>, Palmer Dabbelt
 <palmer@...belt.com>, Albert Ou <aou@...s.berkeley.edu>,
 Alexander Gordeev <agordeev@...ux.ibm.com>,
 Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
 Heiko Carstens <hca@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>,
 Christian Borntraeger <borntraeger@...ux.ibm.com>,
 Sven Schnelle <svens@...ux.ibm.com>, "David S. Miller"
 <davem@...emloft.net>, linux-arm-kernel@...ts.infradead.org,
 linuxppc-dev@...ts.ozlabs.org, linux-riscv@...ts.infradead.org,
 linux-s390@...r.kernel.org, sparclinux@...r.kernel.org
Subject: Re: [PATCH v3 00/15] mm/memory: optimize fork() with PTE-mapped THP

>>> Nope: looks the same. I've taken my test harness out of the picture and done
>>> everything manually from the ground up, with the old tests and the new. Headline
>>> is that I see similar numbers from both.
>>
>> It took me a while to get really reproducible numbers on Intel. Most importantly:
>> * Set a fixed CPU frequency, disabling any boost and avoiding any
>>    thermal throttling.
>> * Pin the test to CPUs and set a nice level.
> 
> I'm already pinning the test to cpu 0. But for M2, at least, I'm running in a VM
> on top of macos, and I don't have a mechanism to pin the QEMU threads to the
> physical CPUs. Anyway, I don't think these are problems because for a given
> kernel build I can accurately repro numbers.

Oh, you do have a layer of virtualization in there. I *suspect* that 
might amplify some odd things regarding code layout, caching effects, etc.

I guess especially the fork() benchmark is too sensitive (it's that fast)
to things like that, so I would just focus on bare-metal results where you
can control the environment completely.

Note that by NUMA effects I mean that some memory accesses within the
same socket are faster/slower even with only a single node. On AMD
EPYC that's possible, depending on which core you are running on and
which memory controller the memory you want to access is attached to. If
the two are in different quadrants, IIUC, the access latency will differ.

>> But yes: I was observing something similar on AMD EPYC, where you get
>> consecutive pages from the buddy, but once you allocate from the PCP it might no
>> longer be consecutive.
>>
>>>    - test is 5-10% slower when output is printed to terminal vs when redirected to
>>>      file. I've always effectively been redirecting. Not sure if this overhead
>>>      could start to dominate the regression and that's why you don't see it?
>>
>> That's weird, because we don't print while measuring? Anyhow, 5-10% variance on
>> some system is not the end of the world.
> 
> I imagine it's cache effects? More work to do to print the output could be
> evicting some code that's in the benchmark path?

Maybe. Do you also see these oddities on the bare metal system?

> 
>>
>>>
>>> I'm inclined to run this test for the last N kernel releases and if the number
>>> moves around significantly, conclude that these tests don't really matter.
>>> Otherwise it's an exercise in randomly refactoring code until it works well, but
>>> that's just overfitting to the compiler and hw. What do you think?
>>
>> Personally, I wouldn't lose sleep if you see weird, unexplainable behavior on
>> some system (not even architecture!). Trying to optimize for that would indeed
>> be random refactorings.
>>
>> But I would not be so fast to say that "these tests don't really matter" and
>> then go wild and degrade them as much as you want. There are use cases that care
>> about fork performance especially with order-0 pages -- such as Redis.
> 
> Indeed. But also remember that my fork baseline time is ~2.5ms, and I think you
> said yours was 14ms :)

Yes, no idea why M2 is that fast (BTW, which page size? 4K or 16K?) :)
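
Just to make sure we're comparing the same thing: by the fork time I
mean roughly the time for the fork() call itself to return in the
parent over a large, populated anon mapping. A stripped-down sketch
(not the actual harness; the 1 GiB size is arbitrary):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define MAP_SIZE (1ul << 30)	/* 1 GiB of anon memory, arbitrary */

static double now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	char *mem = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	double start, delta;
	pid_t pid;

	if (mem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* Populate the mapping so fork() has page tables to copy. */
	memset(mem, 1, MAP_SIZE);

	start = now();
	pid = fork();
	if (pid == 0)
		_exit(0);	/* child exits right away */
	delta = now() - start;	/* time until fork() returned in the parent */
	waitpid(pid, NULL, 0);
	printf("fork(): %f s\n", delta);
	return 0;
}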

> 
> I'll continue to mess around with it until the end of the day. If I'm not
> making any headway by then, I'll change tack; I'll just measure the performance of
> my contpte changes using your fork/zap stuff as the baseline and post based on that.

You should likely not focus on M2 results. Just pick a representative 
bare metal machine where you get consistent, explainable results.

Nothing in the code is fine-tuned for a particular architecture so far;
only order-0 handling is kept separate.

BTW: I see the exact same speedups for dontneed that I see for munmap.
For example, for order-9 it goes from 0.023412s -> 0.009785s, so -58%.
So I'm curious why you see a speedup for munmap but not for dontneed.
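
In case we're measuring different things, the comparison boils down to
something like this (again a stripped-down sketch, not the actual test;
the size is arbitrary and each leg would normally be repeated/averaged):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define MAP_SIZE (1ul << 30)	/* arbitrary */

static double now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

static char *map_populated(void)
{
	char *mem = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (mem == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}
	memset(mem, 1, MAP_SIZE);	/* fault everything in */
	return mem;
}

int main(void)
{
	double start;
	char *mem;

	/* MADV_DONTNEED: zap the PTEs, keep the VMA. */
	mem = map_populated();
	start = now();
	madvise(mem, MAP_SIZE, MADV_DONTNEED);
	printf("dontneed: %f s\n", now() - start);
	munmap(mem, MAP_SIZE);

	/* munmap: zap the PTEs and tear down the VMA. */
	mem = map_populated();
	start = now();
	munmap(mem, MAP_SIZE);
	printf("munmap:   %f s\n", now() - start);
	return 0;
}

Both end up in the same PTE zapping path, which is why I'd expect the
two numbers to move together.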

-- 
Cheers,

David / dhildenb

