Message-ID: <20090207235033.GC3557@Krystal>
Date:	Sat, 7 Feb 2009 18:50:33 -0500
From:	Mathieu Desnoyers <compudj@...stal.dyndns.org>
To:	Kyle Moffett <kyle@...fetthome.net>
Cc:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	ltt-dev@...ts.casi.polymtl.ca, linux-kernel@...r.kernel.org,
	Robert Wisniewski <bob@...son.ibm.com>
Subject: Re: [RFC git tree] Userspace RCU (urcu) for Linux (repost)

* Kyle Moffett (kyle@...fetthome.net) wrote:
> On Thu, Feb 5, 2009 at 11:58 PM, Mathieu Desnoyers
> <compudj@...stal.dyndns.org> wrote:
> > I figured out I needed some userspace RCU for the userspace tracing part
> > of LTTng (for quick read access to the control variables) to trace
> > userspace pthread applications. So I've done a quick-and-dirty userspace
> > RCU implementation.
> >
> > It works so far, but I have not gone through any formal verification
> > phase. It seems to work on paper, and the tests are also OK (so far),
> > but I offer no guarantee for this 300-lines-ish 1-day hack. :-) If you
> > want to comment on it, it would be welcome. It's a userland-only
> > library. It's also currently x86-only, but only a few basic definitions
> > must be adapted in urcu.h to port it.
> 
> I have actually been fiddling with an RCU-esque design for a
> multithreaded event-driven userspace server process.  Essentially all
> threads using RCU-protected data run through a central event loop
> which drives my entirely-userspace RCU state machine.  I actually have
> a cooperative scheduler for groups of events to allow me to
> load-balance a large number of clients without the full overhead of a
> kernel thread per client.  This does rely on
> clock_gettime(CLOCK_THREAD_CPUTIME_ID) returning a useful monotonic
> value, however.
> 
> By building the whole internal system as an
> event-driven state machine, I don't need to keep a stack for blocked
> events.  The events which do large amounts of work call a
> "need_resched()"-ish function every so often, and if it returns true
> they return up the stack.  Relatively few threads (1 per physical CPU,
> plus a few for blocking event polling) are needed to completely
> saturate the system.
> 
> For RCU I simply treat event-handler threads the way the kernel treats
> CPUs: I report a Quiescent State every so often, in between processing
> events.
> 
> The event-handling mechanism is entirely agnostic to the way that
> events are generated.  It has built-in mechanisms for FD, signal, and
> AIO-based events, and it's trivial to add another event-polling thread
> for GTK/Qt/etc.
> 
> I'm still only halfway through laying out the framework for this
> library, but once it's done I'll make sure to post it somewhere for
> those who are interested.
> 
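For concreteness, the quiescent-state scheme Kyle describes above could
look roughly like the sketch below. This is illustrative only: every
name in it (gp_count, qs_state, report_quiescent_state, ...) is made
up, and it is neither code from his unpublished library nor from my
urcu tree.

#include <sched.h>
#include <stdatomic.h>

#define MAX_THREADS 64

/* Global grace-period counter, advanced by updaters. */
static atomic_ulong gp_count = 1;

/* Per-thread snapshot of gp_count; 0 means "offline" (not reading). */
static atomic_ulong qs_state[MAX_THREADS];

/*
 * Reader side: called from the event loop, outside any read-side
 * section.  Note there is no barrier inside the read-side sections
 * themselves; this store is the only synchronization readers pay for.
 */
static void report_quiescent_state(int tid)
{
        atomic_store(&qs_state[tid], atomic_load(&gp_count));
}

/*
 * Updater side: start a new grace period, then wait until every online
 * thread has passed through a quiescent state within it.
 */
static void synchronize_qsbr(void)
{
        unsigned long gp = atomic_fetch_add(&gp_count, 1) + 1;

        for (int i = 0; i < MAX_THREADS; i++)
                while (atomic_load(&qs_state[i]) != 0 &&
                       atomic_load(&qs_state[i]) < gp)
                        sched_yield();
}

Each event-handler thread would simply call report_quiescent_state()
after every few events, which is the "Quiescent State every so often,
in between processing events" idea above.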

That would be interesting to look at. It would indeed be very efficient
on the reader side, because no barriers would be required. However, it
might not be appropriate for use cases like userspace tracing, where we
ideally want to add tracing functionality to applications as a library,
without requiring any change to the application's behavior (e.g. adding
a "quiescent state" call to the application's main loop). I also think
Paul already has such an application-level quiescent-state notification
implementation in the links he gave us; we might want to compare the
two.
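
To illustrate the difference: when the tracing library cannot count on
the application calling something like report_quiescent_state() from
its main loop, each read-side critical section must publish its own
state and pay a barrier on entry. Rough sketch again, with made-up
names, ignoring nesting, and not the actual urcu code:

#include <stdatomic.h>

static atomic_ulong gp_ctr = 1;     /* global grace-period counter */

/* 0 = not in a read-side section.  A real implementation would
 * register the address of each thread's reader_gp with the updater. */
static _Thread_local atomic_ulong reader_gp;

static inline void lib_read_lock(void)
{
        /*
         * Publish which grace period we entered under.  This seq_cst
         * store/load pair is the per-section barrier cost that the
         * quiescent-state scheme above avoids entirely.
         */
        atomic_store(&reader_gp, atomic_load(&gp_ctr));
}

static inline void lib_read_unlock(void)
{
        atomic_store(&reader_gp, 0);
}

The updater bumps gp_ctr and waits for every registered reader_gp to
be 0 or to catch up to the new value, much as in the previous sketch,
except that the barrier cost now sits inside every reader instead of
once per batch of events.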

Mathieu

> Cheers,
> Kyle Moffett
> 

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68