Message-ID: <zhd6njnv63lithg5yetvyniwt34wcltxa5huk4ustp7j7pf2na@6v6qehyb3w3g>
Date:   Thu, 28 Sep 2023 06:40:53 -0700
From:   Davidlohr Bueso <dave@...olabs.net>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     tglx@...utronix.de, axboe@...nel.dk, linux-kernel@...r.kernel.org,
        mingo@...hat.com, dvhart@...radead.org, andrealmeid@...lia.com,
        Andrew Morton <akpm@...ux-foundation.org>, urezki@...il.com,
        hch@...radead.org, lstoakes@...il.com,
        Arnd Bergmann <arnd@...db.de>, linux-api@...r.kernel.org,
        linux-mm@...ck.org, linux-arch@...r.kernel.org,
        malteskarupke@....de, steve.shaw@...el.com,
        marko.makela@...iadb.com, andrei.artemev@...el.com
Subject: Re: futex2 numa stuff

On Fri, 22 Sep 2023, Peter Zijlstra wrote:

>Hi!
>
>Updated version of patch 15/15 and a few extra patches for testing the
>FUTEX2_NUMA bits. The last patch (17/15) should never be applied for anything
>you care about and exists purely because I'm too lazy to generate actual
>hash-bucket contention.
>
>On my 2 node IVB-EP:
>
> $ echo FUTEX_SQUASH > /debug/sched/features
>
>Effectively reducing each node to 1 bucket.
>
> $ numactl -m0 -N0 ./futex_numa -c10 -t2 -n0 -N0 &
>   numactl -m1 -N1 ./futex_numa -c10 -t2 -n0 -N0
>
> ...
> contenders: 16154935
> contenders: 16202472
>
> $ numactl -m0 -N0 ./futex_numa -c10 -t2 -n0 -N0 &
>   numactl -m1 -N1 ./futex_numa -c10 -t2 -n0 -N1
>
> contenders: 48584991
> contenders: 48680560
>
>(loop counts, higher is better)
>
>Clearly showing how separating the hashes works.
>
>The first one runs 10 contenders on each node but forces the (numa) futex to
>hash to node 0 for both. This ensures all 20 contenders hash to the same
>bucket and *ouch*.
>
>The second one does the same, except it now fully separates the nodes.
>Performance is much improved.
>
>Proving the per-node hashing actually works as advertised.
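
For anyone skimming: my reading of the FUTEX2_NUMA interface (an
assumption on my part, not confirmed uapi) is that the futex word is
paired with an explicit node id, which is presumably what the
benchmark's -N argument sets. Rough sketch of that layout:

#include <stdint.h>

/*
 * Assumed FUTEX2_NUMA futex layout -- the names and the "no node"
 * convention here are my guesses, not confirmed uapi.
 */
struct futex_numa_32 {
	uint32_t val;	/* the futex value itself */
	uint32_t node;	/* node whose hash table this futex lands in;
			 * ~0u would mean "let the kernel pick" */
};

/*
 * First run above: both processes use node = 0, so all 20 contenders
 * queue on node 0's (squashed, single) bucket.  Second run: one side
 * uses node = 1, so the waiters split across the two per-node tables.
 */
static struct futex_numa_32 shared_futex = { .val = 0, .node = 0 };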

Very nice.
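
FWIW, for anyone without the 17/15 hack handy, a rough user-space way
to generate the same kind of bucket-lock contention is a FUTEX_WAIT
loop with a deliberately stale expected value: the mismatched wait
still takes the hash-bucket lock before bailing out with -EAGAIN.
Purely an illustrative sketch (thread count, runtime and the counter
name are made up; this is not the futex_numa selftest):

#include <linux/futex.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static unsigned int fword;		/* contended futex word, stays 0 */
static _Atomic unsigned long loops;	/* "contenders:" style counter */

static long futex(unsigned int *uaddr, int op, unsigned int val)
{
	return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

static void *contender(void *arg)
{
	(void)arg;
	for (;;) {
		/*
		 * Expected value 1 never matches fword (0), so the wait
		 * takes the bucket lock, sees the mismatch and returns
		 * without sleeping -- pure hash-bucket contention.
		 */
		futex(&fword, FUTEX_WAIT_PRIVATE, 1);
		atomic_fetch_add(&loops, 1);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[10];
	int i;

	for (i = 0; i < 10; i++)
		pthread_create(&tid[i], NULL, contender, NULL);

	sleep(10);
	printf("contenders: %lu\n", (unsigned long)atomic_load(&loops));
	return 0;
}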
