Date:	Tue, 27 Feb 2007 13:28:32 +0300
From:	Evgeniy Polyakov <johnpol@....mipt.ru>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ulrich Drepper <drepper@...hat.com>,
	linux-kernel@...r.kernel.org,
	Arjan van de Ven <arjan@...radead.org>,
	Christoph Hellwig <hch@...radead.org>,
	Andrew Morton <akpm@....com.au>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Zach Brown <zach.brown@...cle.com>,
	"David S. Miller" <davem@...emloft.net>,
	Suparna Bhattacharya <suparna@...ibm.com>,
	Davide Libenzi <davidel@...ilserver.org>,
	Jens Axboe <jens.axboe@...cle.com>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [patch 00/13] Syslets, "Threadlets", generic AIO support, v3

On Mon, Feb 26, 2007 at 08:54:16PM +0100, Ingo Molnar (mingo@...e.hu) wrote:
> 
> * Linus Torvalds <torvalds@...ux-foundation.org> wrote:
> 
> > > Reading from the disk is _exactly_ the same - the same waiting 
> > > for buffer_heads/pages, and (since it is bigger) it can easily be 
> > > transferred to an event-driven model. Ugh, wait - it not only 
> > > _can_ be transferred, it is already done in kevent AIO, and it 
> > > shows faster speeds (though I have only tested sending the data 
> > > over the net).
> > 
> > It would be absolutely horrible to program for. Try anything more 
> > complex than read/write (which is the simplest case, but even that is 
> > nasty).
> 
> note that even for something as 'simple and straightforward' as TCP 
> sockets, the 25-50 lines of evserver code i worked on today had 3 
> separate bugs, is known to be fundamentally incorrect, and one of the 
> bugs (the lost event problem) showed up as a subtle epoll performance 
> problem that took me more than an hour to track down. And that 
> matches my Tux experience as well: event based models are horribly 
> hard to debug BECAUSE there is /no procedural state associated with 
> requests/. Hence there is no real /proof of progress/. Not much to 
> use for debugging - except huge logs of execution, which, if one is 
> unlucky (which i often was with Tux), would just make the problem go 
> away.

I'm glad you found a bug in my epoll processing code (the bug does not
exist in the kevent part, though) - it took me more than a year after
kevent's introduction to get someone to look into it.

Obviously there are bugs - that is simply how things work.
And debugging state-machine code is at most as complex as debugging
multi-threaded code - if not less so...
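
For reference, the 'lost event' bug mentioned above is the classic
edge-triggered epoll pitfall: with EPOLLET, readiness is reported once
per edge, so a handler must drain the fd until EAGAIN, or the unread
data never generates another event. A minimal sketch of such a handler
(an illustration only, not the actual evserver code):

/* Drain-until-EAGAIN read handler for an EPOLLET socket.
 * Returning before read() hits EAGAIN loses the edge: the
 * remaining data sits in the socket buffer and epoll_wait()
 * never reports the fd again - a stall, not a crash. */
#include <errno.h>
#include <unistd.h>

static void handle_readable(int fd)
{
	char buf[4096];
	ssize_t n;

	for (;;) {
		n = read(fd, buf, sizeof(buf));
		if (n > 0) {
			/* consume buf[0..n) here, then keep reading */
			continue;
		}
		if (n == 0) {
			close(fd);	/* peer closed the connection */
			return;
		}
		if (errno == EAGAIN || errno == EWOULDBLOCK)
			return;		/* fully drained: safe to wait for the next edge */
		close(fd);		/* real error */
		return;
	}
}

Miss that EAGAIN check once and the connection simply stalls with data
queued - exactly the kind of bug that looks like an epoll performance
problem rather than a correctness one.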

> Furthermore, with a 'retry' model, who guarantees that the retry 
> won't be an infinite retry where none of the retries ever progresses 
> the state of the system enough to return the data we are interested 
> in? The moment we have to /retry/, depending on the 'depth' of how 
> deep the retry kicked in, we've got to reach that 'depth' of code 
> again and execute it.
> 
> plus, 'there is not much state' is not even completely true to begin 
> with, even in the most simple, TCP socket case! There /is/ quite a 
> bit of state constructed on the kernel stack: user parameters have 
> been evaluated/converted, the socket has been looked up, its state 
> has been validated, etc. With a 'retry' model - but even with a pure 
> 'event queueing' model - we redo all those things /both/ at request 
> submission and at event generation time, again and again - while with 
> a synchronous syscall you do it just once, and upon event completion 
> a piece of that data is already on the kernel stack. I'd much rather 
> spend time and effort on simplifying the scheduler and reducing the 
> cache footprint of the kernel thread context-switch path, etc., to 
> make it more useful even in the more extreme, highly parallel '100% 
> context-switching' case, because i have first-hand experience of how 
> fragile and inflexible event based servers are. I do think that event 
> interfaces for raw, external physical events make sense in some 
> circumstances, but for any more complex 'derived' event type it's 
> less and less clear whether we want a direct interface to it. For 
> something like the VFS it's outright horrible to even think about.

Ingo, you paint the picture too black.
No one will create a purely event-based model for socket or VFS
processing - an event is the completion of a request: data is in the
buffer, data was sent, and so on. It is also possible to generate
events in the middle of the processing, especially if the operation
can be logically split into stages - like sendfile.
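
Concretely, this 'event is the completion of a request' shape is what
the existing Linux AIO syscalls already expose. A minimal sketch using
libaio (io_submit/io_getevents) - shown only as an illustration of the
completion model, not of kevent's own interface; the file name is made
up, and plain kernel AIO completes asynchronously only for O_DIRECT
I/O:

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	io_context_t ctx = 0;
	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	void *buf;
	int fd;

	if (io_setup(1, &ctx))
		return 1;
	/* hypothetical test file; O_DIRECT needs aligned buffers */
	fd = open("/tmp/data", O_RDONLY | O_DIRECT);
	if (fd < 0 || posix_memalign(&buf, 512, 4096))
		return 1;

	io_prep_pread(&cb, fd, buf, 4096, 0);
	if (io_submit(ctx, 1, cbs) != 1)	/* request now in flight */
		return 1;

	/* the single event the caller sees: the whole request has
	 * completed and the data is already in the buffer */
	if (io_getevents(ctx, 1, 1, &ev, NULL) != 1)
		return 1;
	printf("read completed: %ld bytes\n", (long)ev.res);

	io_destroy(ctx);
	return 0;
}

(Build with -laio.) The same loop generalizes to many outstanding
requests - submit a batch, then reap completions - the caller never
sees the intermediate waiting, only finished requests.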

> 	Ingo

-- 
	Evgeniy Polyakov
