Date:   Tue, 30 May 2023 10:20:09 +0200
From:   Florian Weimer <fweimer@...hat.com>
To:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
        "Paul E . McKenney" <paulmck@...nel.org>,
        Boqun Feng <boqun.feng@...il.com>,
        "H . Peter Anvin" <hpa@...or.com>, Paul Turner <pjt@...gle.com>,
        linux-api@...r.kernel.org, Christian Brauner <brauner@...nel.org>,
        David.Laight@...LAB.COM, carlos@...hat.com,
        Peter Oskolkov <posk@...k.io>,
        Alexander Mikhalitsyn <alexander@...alicyn.com>,
        Chris Kennelly <ckennelly@...gle.com>,
        Ingo Molnar <mingo@...hat.com>,
        Darren Hart <dvhart@...radead.org>,
        Davidlohr Bueso <dave@...olabs.net>,
        André Almeida <andrealmeid@...lia.com>,
        libc-alpha@...rceware.org, Steven Rostedt <rostedt@...dmis.org>,
        Jonathan Corbet <corbet@....net>,
        Noah Goldstein <goldstein.w.n@...il.com>, longman@...hat.com
Subject: Re: [RFC PATCH v2 1/4] rseq: Add sched_state field to struct rseq

* Mathieu Desnoyers:

>> I don't see why we can't stick this directly into struct rseq because
>> it's all public anyway.
>
> The motivation for moving this to a different cache line is to address
> a prior comment from Boqun, who is concerned that busy-waiting by
> repeatedly loading a field from struct rseq will cause false sharing
> and slow down other stores to that cache line, especially stores to
> rseq_cs when entering rseq critical sections, thus slightly increasing
> the overhead of rseq critical sections taken while mutexes are held.

Hmm.  For context: in glibc, we have to place struct rseq at a fixed
offset from the start of a page (or even some larger alignment) for all
threads.  In the future (once we move the thread control block off the
top of the userspace stack, where it has resided since the LinuxThreads
days), the pointer difference between different threads is also likely
to be a multiple of a fairly large power of two (something like 2**20
might be common).  Maybe this will make caching even more difficult?

> If we want to embed this field into struct rseq with its own cache
> line, then we need to add a lot of padding, which is inconvenient.
>
> That being said, perhaps this is premature optimization; what do you
> think?

Maybe?  I don't know what the access patterns will look like.  But I
suspect that once we hit this case, performance will not be great
anyway, so the optimization is perhaps unnecessary?

The challenge is that once we put stuff at fixed offsets, we can't
transparently fix it later.  It would need more auxv entries with
further offsets, or accessing this data through some indirection,
perhaps via vDSO helpers.

>> The TID field would be useful in its own right.
>
> Indeed, good point.
>
> While we are at it, I wonder if we should use thread_pointer() as the
> lock identifier, or whether the address of struct rseq is fine?

Hard to tell until we see what the futex integration looks like, I
think.

Thanks,
Florian
