Message-Id: <1222496261.6710.71.camel@charm-linux>
Date: Sat, 27 Sep 2008 01:17:41 -0500
From: Tom Zanussi <zanussi@...cast.net>
To: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Cc: Martin Bligh <mbligh@...gle.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
prasad@...ux.vnet.ibm.com,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Mathieu Desnoyers <compudj@...stal.dyndns.org>,
Steven Rostedt <rostedt@...dmis.org>, od@...e.com,
"Frank Ch. Eigler" <fche@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>, hch@....de,
David Wilder <dwilder@...ibm.com>
Subject: [RFC PATCH 0/10] relay revamp, third installment
Here's the current relay cleanup patchset.
1-2 make the write path completely replaceable (see the sketch after
this list).
3 adds flags along with some related cleanup.
4-8 remove the padding in several stages.
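
To make the replaceable write path concrete, here's a minimal sketch
of what a client-supplied write op might look like. The ->write member
and its signature are assumptions for illustration only - they aren't
taken from patches 1-2 - though relay_reserve() is the existing relay
API:

#include <linux/relay.h>
#include <linux/string.h>

/*
 * Hypothetical client write op that just mimics the stock
 * reserve-and-copy behavior; a real client could substitute
 * anything here, e.g. timestamping records before the copy.
 */
static void my_relay_write(struct rchan_buf *buf, const void *data,
			   size_t length)
{
	void *dest;

	dest = relay_reserve(buf->chan, length);
	if (dest)
		memcpy(dest, data, length);
}

static struct rchan_callbacks my_callbacks = {
	.write = my_relay_write,	/* assumed hook name */
};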
The new patches in this set are:
9 simplifies the callbacks - now that we have flags, the subbuf_start
callback is much simpler; it has been combined with notify_consumers
and renamed new_subbuf. Because part of the simplification was to
handle buffer-full conditions and count lost events internally, normal
applications don't need to pay attention to it at all.
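
For illustration, a minimal sketch of a client using the combined
callback. The exact new_subbuf() signature below is an assumption -
only the name and the fact that buffer-full/lost-event handling is now
internal come from the description above:

#include <linux/relay.h>

/*
 * Optional hook, called when relay starts filling a new buffer page.
 * Per the description above there's no buffer-full checking or
 * lost-event counting to do here anymore - relay handles both
 * internally - so most clients would just leave this NULL.
 */
static void my_new_subbuf(struct rchan_buf *buf)
{
	/* e.g. write a per-buffer header, if the client has one */
}

static struct rchan_callbacks my_callbacks = {
	.new_subbuf = my_new_subbuf,	/* signature is assumed */
};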
10 removes the idea of sub-buffers completely and deals only with
pages. relay_open() changes accordingly - buffer sizes are now
specified in pages, and consumers are woken only every n_wakeup pages,
or never if n_wakeup is 0.
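
A hedged guess at what a call site might look like after this change;
the exact relay_open() prototype here is an assumption, only the
pages/n_wakeup semantics come from the description above:

#include <linux/relay.h>

static struct rchan *open_trace_channel(struct dentry *parent,
					struct rchan_callbacks *cb)
{
	/*
	 * 64-page buffer per cpu; wake consumers every 8 pages.
	 * Passing 0 for n_wakeup would mean they're never woken.
	 */
	return relay_open("trace", parent, 64 /* n_pages */,
			  8 /* n_wakeup */, cb, NULL);
}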
It's a work in progress, but because I wanted the intermediate stages to
actually work and not break anything, some of these patches, especially
05, are just temporary and will be removed in the next iteration.
I didn't have time to clean up the first 3 either - I'll also do that
the next time around.
In the next round I plan to do vmap removal.
Tom