Message-Id: <200611240114.04877.hhh@imada.sdu.dk>
Date:	Fri, 24 Nov 2006 01:14:04 +0100
From:	Hans Henrik Happe <hhh@...da.sdu.dk>
To:	Jeff Garzik <jeff@...zik.org>
Cc:	Ulrich Drepper <drepper@...hat.com>,
	Evgeniy Polyakov <johnpol@....mipt.ru>,
	David Miller <davem@...emloft.net>,
	Andrew Morton <akpm@...l.org>, netdev <netdev@...r.kernel.org>,
	Zach Brown <zach.brown@...cle.com>,
	Christoph Hellwig <hch@...radead.org>,
	Chase Venters <chase.venters@...entec.com>,
	Johann Borck <johann.borck@...sedata.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [take25 1/6] kevent: Description.

On Thursday 23 November 2006 23:48, Jeff Garzik wrote:
> I'm really wondering if designing for N-threads-to-1-ring is the wisest 
> choice?
> 
> Considering current designs, it seems more likely that a single thread 
> polls for socket activity, then dispatches work.  How often do you 
> really see in userland multiple threads polling the same set of fds, 
> then fighting to decide who will handle raised events?

They should not fight, but gently divide event handling work.
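
To make that concrete, here is a rough sketch of the shared-queue pattern,
with epoll standing in for the kevent ring since the kevent API is still
moving; NTHREADS and handle_event() are invented for the example.  Whichever
thread dequeues an event handles it, with no userspace locking around the
queue itself:

    #include <sys/epoll.h>
    #include <pthread.h>
    #include <unistd.h>

    #define NTHREADS 4

    static int epfd;                 /* the one shared event queue */

    /* placeholder handler: just drain whatever is readable */
    static void handle_event(int fd)
    {
            char buf[4096];

            while (read(fd, buf, sizeof(buf)) > 0)
                    ;
    }

    static void *worker(void *arg)
    {
            struct epoll_event ev;

            (void)arg;
            for (;;) {
                    /* each call hands at most one event to this thread */
                    if (epoll_wait(epfd, &ev, 1, -1) == 1)
                            handle_event(ev.data.fd);
            }
            return NULL;
    }

    int main(void)
    {
            pthread_t tid[NTHREADS];
            int i;

            epfd = epoll_create(256);
            /* ... epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) with
             * EPOLLIN|EPOLLET for every socket of interest ... */

            for (i = 0; i < NTHREADS; i++)
                    pthread_create(&tid[i], NULL, worker, NULL);
            for (i = 0; i < NTHREADS; i++)
                    pthread_join(tid[i], NULL);
            return 0;
    }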
 
> More likely, you will see "prefork" (start N threads, each with its own 
> ring) 

One ring could be busier than the others, leaving all the work to one thread 
while the rest sit idle.
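
For contrast, a small fragment of that per-thread-ring placement, again with
epoll standing in for the ring and with invented names; the point is that
placement is static, so a busy ring stays busy:

    #include <sys/epoll.h>

    #define NTHREADS 4

    static int ring[NTHREADS];   /* one private event queue per thread */
    static int next_ring;        /* round-robin placement counter */

    /* called by the accepting thread for each new connection; once a
     * socket is added to a ring it stays there, so if the busy
     * connections all land in the same ring, that thread does all the
     * work while the others idle */
    static void assign_connection(int connfd)
    {
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = connfd };

            epoll_ctl(ring[next_ring], EPOLL_CTL_ADD, connfd, &ev);
            next_ring = (next_ring + 1) % NTHREADS;
    }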

> or a worker pool (single thread receives events, then dispatches  
> to multiple threads for execution) or even one-thread-per-fd (single 
> thread receives events, then starts new thread for handling).

This is more like fighting :-) 
It adds context switches and therefore extra latency for event handling. 
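
A rough sketch of that handoff (invented names, fixed-size queue, no overflow
handling) just to show where the extra switch comes from: the dispatcher has
to wake a sleeping worker before any handling starts:

    #include <pthread.h>

    #define QSIZE 1024

    static int queue[QSIZE];          /* fds waiting to be handled */
    static unsigned int head, tail;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

    /* dispatcher thread: called for each fd its event loop reports ready */
    static void dispatch(int fd)
    {
            pthread_mutex_lock(&lock);
            queue[tail++ % QSIZE] = fd;
            pthread_cond_signal(&nonempty); /* wake a worker: the extra switch */
            pthread_mutex_unlock(&lock);
    }

    /* worker threads: sleep until the dispatcher hands them something */
    static int get_work(void)
    {
            int fd;

            pthread_mutex_lock(&lock);
            while (head == tail)
                    pthread_cond_wait(&nonempty, &lock);
            fd = queue[head++ % QSIZE];
            pthread_mutex_unlock(&lock);
            return fd;
    }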
 
> If you have multiple threads accessing the same ring -- a poor design 
> choice -- I would think the burden should be on the application, to 
> provide proper synchronization.

Coming from the HPC world I do not agree: context switches should be avoided. 
This paper is a good example: 

http://cobweb.ecn.purdue.edu/~vpai/Publications/majumder-lacsi04.pdf

The latency problems that context switches introduce in that work call for 
even more functionality in event handling. I will not go into details now; 
there are enough problems with kevent's current feature set, and I believe 
these extra features can be added later without breaking the API.

--

Hans Henrik Happe
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
