Message-ID: <4ACC41A2.8070801@lumino.de>
Date:	Wed, 07 Oct 2009 09:22:10 +0200
From:	Michael Schnell <mschnell@...ino.de>
To:	unlisted-recipients:; (no To-header on input)
CC:	linux-kernel@...r.kernel.org
Subject: Re: [RFC] Userspace RCU: (ab)using futexes to save cpu cycles and
 energy

Mathieu Desnoyers wrote:
> Hrm, your assumption about the common case does not seem to fit my
> scenarios. 
Yep, obviously I was mistaken.

So I now understand that you want to schedule the thread in response to
some event, and the thread might already be running at that time, so
that it does not need to enter an OS-based wait state.

But to me this looks as if a _counting_ semaphore is needed here (at
least in the more general case), rather than a binary semaphore (which
is what a futex is typically used to build). Only with a counting
semaphore are no events missed (unless the user software design handles
this by other means, which might be difficult or impossible).

Of course the fast path of a user space counting semaphore is easily
doable with atomic_inc() and atomic_dec() (see the sketch below).
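
For illustration, a minimal sketch of such a fast path, written with
GCC's __atomic builtins in place of atomic_inc()/atomic_dec() (the
names and layout are made up for this example, not taken from any
existing library):

/* count >= 0: number of available tokens (pending events) */
typedef struct { int count; } csem;

/* post fast path: record one event; a slow path would FUTEX_WAKE a sleeper */
static inline void csem_post_fastpath(csem *s)
{
        __atomic_fetch_add(&s->count, 1, __ATOMIC_RELEASE);
}

/* trywait fast path: consume one event without entering the kernel, if possible */
static inline int csem_trywait_fastpath(csem *s)
{
        int c = __atomic_load_n(&s->count, __ATOMIC_RELAXED);

        while (c > 0) {
                /* on CAS failure, c is reloaded with the current count */
                if (__atomic_compare_exchange_n(&s->count, &c, c - 1, 0,
                                __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
                        return 1;   /* got a token, no syscall needed */
        }
        return 0;   /* no token; caller must fall back to a blocking slow path */
}

Only when csem_trywait_fastpath() fails does the thread have to involve
the kernel at all, which is where the futex question comes in.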

But I only know of two working variants of the user space code for a
(binary) futex. There seem to be several more implementations around
that do not work correctly.
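
For reference, one commonly cited pattern for a binary futex-based
event looks roughly like this (just a sketch; I am not claiming it is
one of the two variants meant above):

#include <limits.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* wait until *f has been set nonzero by event_signal() */
static void event_wait(int *f)
{
        while (__atomic_load_n(f, __ATOMIC_ACQUIRE) == 0)
                /* the kernel re-checks *f == 0 atomically against wakers,
                   so a signal arriving right here is not lost; the call
                   simply returns without sleeping */
                syscall(SYS_futex, f, FUTEX_WAIT, 0, NULL, NULL, 0);
}

static void event_signal(int *f)
{
        __atomic_store_n(f, 1, __ATOMIC_RELEASE);
        syscall(SYS_futex, f, FUTEX_WAKE, INT_MAX, NULL, NULL, 0);
}

Being binary, two signals delivered before the next wait collapse into
one - exactly the lost-event problem described above.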

I have no idea whether it's possible to create a counting semaphore in
user space that uses the futex syscall (or similar) for the case where
the thread needs to wait.
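
If it is possible, I would expect the slow path to sit on top of the
fast path sketched above, roughly like this (again just a rough,
untested sketch of my own; a real implementation would also have to
track waiters so that the FUTEX_WAKE syscall can be skipped when nobody
is sleeping, which is the whole point of saving cycles here):

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* slow path: block until a token shows up, then take it via the fast path */
static void csem_wait(csem *s)
{
        while (!csem_trywait_fastpath(s))
                /* sleep only while the count is still 0; the kernel checks
                   this atomically against csem_post(), so a token posted
                   in between is not missed */
                syscall(SYS_futex, &s->count, FUTEX_WAIT, 0, NULL, NULL, 0);
}

/* post: add a token and wake at most one sleeper */
static void csem_post(csem *s)
{
        csem_post_fastpath(s);
        /* unconditional wake is correct but wasteful; with waiter
           bookkeeping this syscall disappears in the uncontended case */
        syscall(SYS_futex, &s->count, FUTEX_WAKE, 1, NULL, NULL, 0);
}

Waking only one sleeper per post matches the single token that was
added; waking more would just cause needless wakeups.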

-Michael

