Date: Thu, 23 May 2024 16:39:33 -0400
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: André Almeida <andrealmeid@...lia.com>,
 Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
 "Paul E . McKenney" <paulmck@...nel.org>, Boqun Feng <boqun.feng@...il.com>,
 "H . Peter Anvin" <hpa@...or.com>, Paul Turner <pjt@...gle.com>,
 linux-api@...r.kernel.org, Christian Brauner <brauner@...nel.org>,
 Florian Weimer <fw@...eb.enyo.de>, David.Laight@...LAB.COM,
 carlos@...hat.com, Peter Oskolkov <posk@...k.io>,
 Alexander Mikhalitsyn <alexander@...alicyn.com>,
 Chris Kennelly <ckennelly@...gle.com>, Ingo Molnar <mingo@...hat.com>,
 Darren Hart <dvhart@...radead.org>, Davidlohr Bueso <dave@...olabs.net>,
 libc-alpha@...rceware.org, Steven Rostedt <rostedt@...dmis.org>,
 Jonathan Corbet <corbet@....net>, Noah Goldstein <goldstein.w.n@...il.com>,
 longman@...hat.com, kernel-dev@...lia.com
Subject: Re: [PATCH v2 0/1] Add FUTEX_SPIN operation

On 2024-05-23 16:07, André Almeida wrote:
> Hi,
> 
> In the last LPC, Mathieu Desnoyers and I presented[0] a proposal to extend the
> rseq interface to make it possible to implement spin locks in userspace
> correctly. Thomas Gleixner agreed that this is something Linux could improve,
> but asked for an alternative proposal first: a futex operation that allows
> spinning on a user lock inside the kernel. This patchset implements a prototype
> of this idea for further discussion.
> 
> With the FUTEX2_SPIN flag set during a futex_wait(), the futex value is expected
> to be the TID of the lock owner. The kernel then gets the task_struct of the
> corresponding TID and checks whether it's running. It spins until the futex is
> woken, the owner is scheduled out, or a timeout happens. If the lock owner is
> scheduled out at any time, the syscall falls back to the normal path of
> sleeping as usual. The user input is masked with FUTEX_TID_MASK, so we have
> some bits to play with.
> 
> If the futex is woken while we are spinning, we can return to userspace quickly,
> avoiding being scheduled out and back in again to wake from a futex_wait(), thus
> speeding up the wait operation.
> 
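For reference, a minimal sketch of how userspace might use such a flag
(the FUTEX2_SPIN value, the raw futex2 syscall invocations and the lock
layout below are assumptions for illustration, not taken from the patch):

  /*
   * Sketch of a TID-owned lock on top of the proposed FUTEX2_SPIN flag.
   * Assumes the futex2 futex_wait()/futex_wake() syscalls and kernel
   * headers that provide them; the FUTEX2_SPIN value is a placeholder.
   */
  #define _GNU_SOURCE
  #include <stdatomic.h>
  #include <stdint.h>
  #include <time.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/futex.h>

  #ifndef FUTEX2_SPIN
  #define FUTEX2_SPIN 0x40              /* placeholder flag value */
  #endif

  static _Atomic uint32_t lock_word;    /* 0 == unlocked, else owner TID */

  static void lock(void)
  {
          uint32_t tid = gettid(), expected;

          for (;;) {
                  expected = 0;
                  if (atomic_compare_exchange_strong(&lock_word,
                                                     &expected, tid))
                          return;       /* we own the lock */
                  /*
                   * 'expected' now holds the owner's TID; with FUTEX2_SPIN
                   * the kernel spins while that task is running and sleeps
                   * otherwise, as described above.
                   */
                  syscall(SYS_futex_wait, &lock_word, expected,
                          FUTEX_BITSET_MATCH_ANY,
                          FUTEX2_SIZE_U32 | FUTEX2_SPIN,
                          NULL, CLOCK_MONOTONIC);
          }
  }

  static void unlock(void)
  {
          atomic_store(&lock_word, 0);
          syscall(SYS_futex_wake, &lock_word, FUTEX_BITSET_MATCH_ANY, 1,
                  FUTEX2_SIZE_U32);
  }
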
> Christian Brauner suggested using a pidfd to avoid race conditions, and I will
> implement that in the next patch iteration. I benchmarked the implementation,
> measuring the time required to wait for a futex in a simple loop, using the code
> at [2]. In my setup, the total wait time for 1000 futexes using the spin method
> was almost 10% lower than with the normal futex wait:
> 
> 	Testing with FUTEX2_SPIN | FUTEX_WAIT
> 	Total wait time: 8650089 usecs
> 
> 	Testing with FUTEX_WAIT
> 	Total wait time: 9447291 usecs
> 
> However, as I played with how long the lock owner stays busy, the benchmark
> results of spinning vs. not spinning would converge, showing that spinning is
> effective for some specific scheduling scenarios, but that depending on the
> wait time there is no big difference between spinning and not spinning.
> 
> [0] https://lpc.events/event/17/contributions/1481/
> 
> You can find a small snippet to play with this interface here:
> 
> [1] https://gist.github.com/andrealmeid/f0b8c93a3c7a5c50458247c47f7078e1

What exactly are you trying to benchmark here? I've looked at this toy
program, and I suspect that most of the delay you observe is due to the
initial scheduling of a newly cloned thread, because that is what is
repeatedly being done within the delay you measure.

I would recommend changing this benchmark program to measure something
meaningful, e.g. (a rough skeleton follows the list below):

- N threads repeatedly contending on a lock, until a "stop" flag is set,
- run for "duration" seconds, after which main() sets the "stop" flag,
- a delay loop of "work_delay" us within the lock critical section,
- a delay loop of "inactive_delay" us between locking attempts,
- measure the time it takes to grab the lock and report stats on this,
- measure the total number of operations done within the given
   "duration",
- report statistics on the number of operations per thread to see
   the impact on fairness.
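
Something along these lines (a sketch only; lock()/unlock() wrap a plain
pthread mutex here as a stand-in for the lock under test, and the thread
count, duration and delay values are arbitrary):

  /*
   * Rough benchmark skeleton: NR_THREADS contend on a lock until main()
   * sets the stop flag after DURATION_SEC seconds.
   */
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <time.h>
  #include <unistd.h>

  #define NR_THREADS        8
  #define DURATION_SEC      10
  #define WORK_DELAY_US     5     /* inside the critical section */
  #define INACTIVE_DELAY_US 5     /* between locking attempts */

  static atomic_bool stop;
  static pthread_mutex_t test_lock = PTHREAD_MUTEX_INITIALIZER;

  static void lock(void)   { pthread_mutex_lock(&test_lock); }
  static void unlock(void) { pthread_mutex_unlock(&test_lock); }

  static uint64_t now_ns(void)
  {
          struct timespec ts;

          clock_gettime(CLOCK_MONOTONIC, &ts);
          return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
  }

  static void delay_us(uint64_t us)   /* busy loop for roughly 'us' us */
  {
          uint64_t end = now_ns() + us * 1000;

          while (now_ns() < end)
                  ;
  }

  static void *worker(void *arg)
  {
          long id = (long)arg;
          uint64_t ops = 0, wait_ns = 0, t;

          while (!atomic_load(&stop)) {
                  t = now_ns();
                  lock();
                  wait_ns += now_ns() - t;        /* time to grab the lock */
                  delay_us(WORK_DELAY_US);        /* critical section work */
                  unlock();
                  ops++;
                  delay_us(INACTIVE_DELAY_US);
          }
          /* per-thread ops show fairness, wait_ns shows lock latency */
          printf("thread %ld: %llu ops, %llu ns waiting for the lock\n",
                 id, (unsigned long long)ops, (unsigned long long)wait_ns);
          return NULL;
  }

  int main(void)
  {
          pthread_t tid[NR_THREADS];

          for (long i = 0; i < NR_THREADS; i++)
                  pthread_create(&tid[i], NULL, worker, (void *)i);
          sleep(DURATION_SEC);
          atomic_store(&stop, true);              /* main sets the stop flag */
          for (int i = 0; i < NR_THREADS; i++)
                  pthread_join(tid[i], NULL);
          return 0;
  }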

Then run the program with the following constraints (a pinning sketch
follows the list below):

- Pin one thread per core, with nb threads <= nb cores. This should
   be the best-case scenario for spinning.
- Pin all threads to a single core; when nb threads > nb cores, this
   should be the worst-case scenario for spinning.
- Group things between those two extremes to see how things evolve.
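
For the pinning, something along these lines should do (the tid[] array
and nr_cores in the comments refer to the skeleton above and are only
illustrative):

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>

  static void pin_to_cpu(pthread_t t, int cpu)
  {
          cpu_set_t set;

          CPU_ZERO(&set);
          CPU_SET(cpu, &set);
          pthread_setaffinity_np(t, sizeof(set), &set);
  }

  /* best case for spinning:  pin_to_cpu(tid[i], i % nr_cores); */
  /* worst case for spinning: pin_to_cpu(tid[i], 0);            */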

I would not be surprised if, once you measure relevant delays, you
observe much better speedups than what you currently have.

Thanks,

Mathieu

> 
> Changelog:
> 
> v1: - s/PID/TID
>      - masked user input with FUTEX_TID_MASK
>      - add benchmark tool to the cover letter
>      - dropped debug prints
>      - added missing put_task_struct()
> 
> André Almeida (1):
>    futex: Add FUTEX_SPIN operation
> 
>   include/uapi/linux/futex.h |  2 +-
>   kernel/futex/futex.h       |  6 ++-
>   kernel/futex/waitwake.c    | 78 +++++++++++++++++++++++++++++++++++++-
>   3 files changed, 82 insertions(+), 4 deletions(-)
> 

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

