Message-Id: <20250203214911.898276-1-ankur.a.arora@oracle.com>
Date: Mon,  3 Feb 2025 13:49:07 -0800
From: Ankur Arora <ankur.a.arora@...cle.com>
To: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org
Cc: arnd@...db.de, catalin.marinas@....com, will@...nel.org,
        peterz@...radead.org, mark.rutland@....com, harisokn@...zon.com,
        cl@...two.org, memxor@...il.com, zhenglifeng1@...wei.com,
        joao.m.martins@...cle.com, boris.ostrovsky@...cle.com,
        konrad.wilk@...cle.com, Ankur Arora <ankur.a.arora@...cle.com>
Subject: [PATCH 0/4] barrier: Introduce smp_cond_load_*_timeout()

Hi,

This series adds waited variants of the smp_cond_load() primitives:
smp_cond_load_relaxed_timewait() and smp_cond_load_acquire_timewait().

There are two known users for these interfaces:

 - poll_idle() [1]
 - resilient queued spinlocks [2]

For both of these cases, we want to wait on a condition but also be
able to terminate the wait at some point.

Now, in theory, that can be worked around by making the time check a
part of the conditional expression provided to smp_cond_load_*():

   smp_cond_load_relaxed(&cvar, !VAL || time_check());

That approach, however, runs into two problems:
 
  - smp_cond_load_*() only allows waiting on a condition: this might
    be okay when we are synchronously spin-waiting on the condition,
    but not on architectures where we are actually waiting for a store
    to a cacheline.

  - on arm64, this goes from a semantic problem to a real one if the
    event-stream is disabled: there is then no asynchronous event
    (the event-stream) periodically waking up the waiter, which can
    lead to an interminable wait if VAL is never written to.

This series extends the smp_cond_load_*() interfaces by adding two
arguments: a time-check expression and its associated time limit.
This is sufficient to allow for either a synchronously waited
implementation (like the generic cpu_relax() based loop), or one
where the CPU waits for a store to a cacheline with an out-of-band
timer.
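
As a rough usage sketch (wait_for_flag/my_dev are just placeholder
names, and local_clock() is only one possible choice of time-check
expression; the exact calling convention is defined in patch 1):

   /*
    * Wait for dev->flag to become non-zero, giving up after
    * timeout_ns. Returns false if we timed out with the flag
    * still clear.
    */
   static bool wait_for_flag(struct my_dev *dev, u64 timeout_ns)
   {
           u64 deadline = local_clock() + timeout_ns;

           return smp_cond_load_relaxed_timewait(&dev->flag, VAL != 0,
                                                 local_clock(), deadline);
   }

Since the time-check expression is supplied by the caller, the caller
also gets to choose the clock (and its granularity) rather than having
the interface impose one.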

Any comments appreciated!


Ankur

[1] https://lore.kernel.org/lkml/20241107190818.522639-3-ankur.a.arora@oracle.com/
[2] https://lore.kernel.org/lkml/20250107140004.2732830-9-memxor@gmail.com/

--
Cc: Arnd Bergmann <arnd@...db.de>
Cc: Will Deacon <will@...nel.org>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Kumar Kartikeya Dwivedi <memxor@...il.com>
Cc: linux-arch@...r.kernel.org

Ankur Arora (4):
  asm-generic: barrier: Add smp_cond_load_relaxed_timewait()
  asm-generic: barrier: Add smp_cond_load_acquire_timewait()
  arm64: barrier: Add smp_cond_load_relaxed_timewait()
  arm64: barrier: Add smp_cond_load_acquire_timewait()

 arch/arm64/include/asm/barrier.h | 74 ++++++++++++++++++++++++++++++++
 include/asm-generic/barrier.h    | 71 ++++++++++++++++++++++++++++++
 2 files changed, 145 insertions(+)

-- 
2.43.5

