Message-ID: <20150106121806.GC26845@kmo-pixel>
Date:	Tue, 6 Jan 2015 04:18:06 -0800
From:	Kent Overstreet <kmo@...erainc.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Sedat Dilek <sedat.dilek@...il.com>,
	Dave Jones <davej@...emonkey.org.uk>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>, Chris Mason <clm@...com>
Subject: Re: Linux 3.19-rc3

On Tue, Jan 06, 2015 at 12:58:22PM +0100, Peter Zijlstra wrote:
> On Tue, Jan 06, 2015 at 03:07:30AM -0800, Kent Overstreet wrote:
> > http://evilpiepirate.org/git/linux-bcache.git/log/?h=aio_ring_fix
> 
> Very terse changelogs there :/

erg, I've been slacking on changelogs lately. That closure_sync() fix definitely
merits an explanation.

> Also, I'm not sure I agree with that whole closure_wait_event*() stuff,
> the closure interface as it exist before that makes sense, but now
> you're just mixing up things.
> 
> Why would you want to retrofit a lot of the wait_event*() stuff on top
> of this?

Actually, it's not retrofitted: closure_wait_event() dates back to the original
closure code. It was dropped for a while because bcache happened not to be using
it anymore, and I just dug it out of the git history.

Think of it this way - closures wait on things: sometimes you want to wait
asynchronously, sometimes synchronously, but you want the same primitives for
both - something has to bridge the gap between the async and sync stuff.

For example - here's the code in the bcache-dev branch that handles reading the
journal from each device in the cache set in parallel:

http://evilpiepirate.org/git/linux-bcache.git/tree/drivers/md/bcache/journal.c?h=bcache-dev#n399

It's using closure_call() to kick off the read for each device, then
closure_sync() to wait for them all to finish.

So closure_sync() is completely necessary, and once you've got that,
closure_wait_event() is just a trivial macro.

Also, closures could be using a wait_queue_head_t instead of a closure waitlist;
mainly I didn't want to nearly double the size of struct closure by stuffing in
a __wait_queue.

I'd argue that "closures, the junk for writing weird pseudo-continuation-passing
style asynchronous C" is not really the important part of closures; the
important part is the infrastructure for waiting on stuff and then doing
something when that stuff completes. closure_get(), closure_put() and waitlists
are the real primitives; both closure_sync() and all the fancy asynchronous
stuff are built on top of those.
