Message-ID: <0fd31bb1-6b76-4d27-9365-4dedfc323b2c@kernel.dk>
Date: Wed, 2 Oct 2024 10:14:05 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
 Matthew Wilcox <willy@...radead.org>
Cc: paulmck@...nel.org, Linus Torvalds <torvalds@...ux-foundation.org>,
 Andrew Morton <akpm@...ux-foundation.org>,
 Peter Zijlstra <peterz@...radead.org>, linux-kernel@...r.kernel.org,
 Nicholas Piggin <npiggin@...il.com>, Michael Ellerman <mpe@...erman.id.au>,
 Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
 Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
 Will Deacon <will@...nel.org>, Boqun Feng <boqun.feng@...il.com>,
 Alan Stern <stern@...land.harvard.edu>, John Stultz <jstultz@...gle.com>,
 Neeraj Upadhyay <Neeraj.Upadhyay@....com>,
 Frederic Weisbecker <frederic@...nel.org>,
 Joel Fernandes <joel@...lfernandes.org>,
 Josh Triplett <josh@...htriplett.org>, Uladzislau Rezki <urezki@...il.com>,
 Steven Rostedt <rostedt@...dmis.org>, Lai Jiangshan
 <jiangshanlai@...il.com>, Zqiang <qiang.zhang1211@...il.com>,
 Ingo Molnar <mingo@...hat.com>, Waiman Long <longman@...hat.com>,
 Mark Rutland <mark.rutland@....com>, Thomas Gleixner <tglx@...utronix.de>,
 Vlastimil Babka <vbabka@...e.cz>, maged.michael@...il.com,
 Mateusz Guzik <mjguzik@...il.com>,
 Jonas Oberhauser <jonas.oberhauser@...weicloud.com>, rcu@...r.kernel.org,
 linux-mm@...ck.org, lkmm@...ts.linux.dev
Subject: Re: [RFC PATCH 0/4] sched+mm: Track lazy active mm existence with
 hazard pointers

On 10/2/24 10:02 AM, Mathieu Desnoyers wrote:
> On 2024-10-02 17:58, Jens Axboe wrote:
>> On 10/2/24 9:53 AM, Mathieu Desnoyers wrote:
>>> On 2024-10-02 17:36, Mathieu Desnoyers wrote:
>>>> On 2024-10-02 17:33, Matthew Wilcox wrote:
>>>>> On Wed, Oct 02, 2024 at 11:26:27AM -0400, Mathieu Desnoyers wrote:
>>>>>> On 2024-10-02 16:09, Paul E. McKenney wrote:
>>>>>>> On Tue, Oct 01, 2024 at 09:02:01PM -0400, Mathieu Desnoyers wrote:
>>>>>>>> Hazard pointers appear to be a good fit for replacing refcount based lazy
>>>>>>>> active mm tracking.
>>>>>>>>
>>>>>>>> Highlight:
>>>>>>>>
>>>>>>>> will-it-scale context_switch1_threads
>>>>>>>>
>>>>>>>> nr threads (-t)     speedup
>>>>>>>>        24                +3%
>>>>>>>>        48               +12%
>>>>>>>>        96               +21%
>>>>>>>>       192               +28%
>>>>>>>
>>>>>>> Impressive!!!
>>>>>>>
>>>>>>> I have to ask...  Any data for smaller numbers of CPUs?
>>>>>>
>>>>>> Sure, but they are far less exciting ;-)
>>>>>
>>>>> How many CPUs in the system under test?
>>>>
>>>> 2 sockets, 96-core per socket:
>>>>
>>>> CPU(s):                   384
>>>>     On-line CPU(s) list:    0-383
>>>> Vendor ID:                AuthenticAMD
>>>>     Model name:             AMD EPYC 9654 96-Core Processor
>>>>       CPU family:           25
>>>>       Model:                17
>>>>       Thread(s) per core:   2
>>>>       Core(s) per socket:   96
>>>>       Socket(s):            2
>>>>       Stepping:             1
>>>>       Frequency boost:      enabled
>>>>       CPU(s) scaling MHz:   68%
>>>>       CPU max MHz:          3709.0000
>>>>       CPU min MHz:          400.0000
>>>>       BogoMIPS:             4800.00
>>>>
>>>> Note that Jens Axboe got even more impressive speedups testing this
>>>> on his 512-hw-thread EPYC [1] (390% speedup for 192 threads). I've
>>>> noticed I had schedstats and sched debug enabled in my config, so
>>>> I'll have to re-run my tests.
>>>
>>> A quick re-run of the 128-thread case with schedstats and sched debug
>>> disabled still shows around a 26% speedup, similar to my prior numbers.
>>>
>>> I'm not sure why Jens has much better speedups on a similar system.
>>>
>>> I'm attaching my config in case someone spots anything obvious. Note
>>> that my BIOS is configured to show 24 NUMA nodes to the kernel (one
>>> NUMA node per core complex).
>>
>> Here's my .config - note it's from the stock kernel run, which is why it
>> still has:
>>
>> CONFIG_MMU_LAZY_TLB_REFCOUNT=y
>>
>> set. I have the same NUMA configuration as you, but end up with 32
>> nodes on this box.
> 
> Just to make sure: did you use any other command-line options when
> starting the test program (other than -t N)?

I did not; this is literally what I ran:

for i in 24 48 96 192 256 512 1024 2048; do echo $i threads; timeout -s INT -k 30 30 ./context_switch1_threads -t $i; done

and the numbers I got were very stable between runs and reboots.

-- 
Jens Axboe
