Message-ID: <20070226203543.GB23357@elte.hu>
Date:	Mon, 26 Feb 2007 21:35:43 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Evgeniy Polyakov <johnpol@....mipt.ru>
Cc:	Ulrich Drepper <drepper@...hat.com>, linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Arjan van de Ven <arjan@...radead.org>,
	Christoph Hellwig <hch@...radead.org>,
	Andrew Morton <akpm@....com.au>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Zach Brown <zach.brown@...cle.com>,
	"David S. Miller" <davem@...emloft.net>,
	Suparna Bhattacharya <suparna@...ibm.com>,
	Davide Libenzi <davidel@...ilserver.org>,
	Jens Axboe <jens.axboe@...cle.com>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [patch 00/13] Syslets, "Threadlets", generic AIO support, v3


* Evgeniy Polyakov <johnpol@....mipt.ru> wrote:

> If kernelspace rescheduling is that fast, then please explain to me 
> why a userspace one always beats kernel/userspace?

because 'user space scheduling' makes no sense? I explained my thinking 
about that in a past mail:

-------------------------->
One often-repeated (because pretty much the only) performance advantage 
of 'light threads' is context-switch performance between user-space 
threads. But the reality is, nobody /cares/ about being able to 
context-switch between "light user-space threads"! Why? Because there 
are only two reasons why such a high-performance context-switch would 
occur:

 1) there's contention between those two tasks. Wonderful: now two
    artificial threads are running on the /same/ CPU and they are even
    contending with each other. Why not run a single context on a
    single CPU instead and only get contended if /another/ CPU runs a
    conflicting context? While this makes for nice "pthread locking
    benchmarks", it is not really useful for anything real.

 2) there has been an IO event. The thing is, for IO events we enter the
    kernel no matter what - and we'll do so for the next 10 years at
    minimum. We want to abstract away the hardware, we want to do
    reliable resource accounting, we want to share hardware resources,
    we want to rate-limit, etc., etc. While in /theory/ you could handle
    IO purely from user-space, in practice you don't want to do that. And
    if we accept the premise that we'll enter the kernel anyway, there's
    zero performance difference between scheduling right there in the
    kernel and returning back to user-space to schedule there (in fact
    i submit that the former is faster). Or if we accept the theoretical
    possibility of 'perfect IO hardware' that implements /all/ the
    features that the kernel wants (in a secure and generic way, and
    mind you, such IO hardware does not exist yet), then /at most/ the
    performance advantage of user-space doing the scheduling is the
    overhead of a null syscall entry. Which is a whopping 100 nsecs on
    modern CPUs! That's roughly the latency of a /single/ DRAM access!
....
<-----------

(see http://lwn.net/Articles/219958/)
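
as a quick sanity check of the "whopping 100 nsecs" figure in point 2, 
here is a minimal illustrative sketch (not part of the quoted mail): 
time a do-nothing syscall such as gettid() in a tight loop, so the loop 
mostly measures the bare kernel entry/exit cost. The iteration count is 
arbitrary, and the result will of course vary by CPU and kernel:

/* syscall_cost.c -- illustrative sketch, not from the original mail.
 * Build: gcc -O2 -o syscall_cost syscall_cost.c
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
	const long iters = 10 * 1000 * 1000;	/* arbitrary */
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (long i = 0; i < iters; i++)
		syscall(SYS_gettid);	/* forces a real kernel entry */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ns = (t1.tv_sec - t0.tv_sec) * 1e9
		  + (t1.tv_nsec - t0.tv_nsec);
	printf("%.1f ns per syscall entry/exit\n", ns / iters);
	return 0;
}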
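and to make Evgeniy's user-space vs. kernel comparison concrete, a 
rough sketch of the measurement itself (again illustrative only, with 
error handling omitted): ping-pong between two contexts purely in user 
space via swapcontext(), then between two threads blocking on a pair of 
pipes, pinned to one CPU so each round trip forces two scheduler 
switches. Note that glibc's swapcontext() itself does an 
rt_sigprocmask syscall per switch to save the signal mask, which rather 
underlines how hard it is to stay out of the kernel:

/* switch_cost.c -- illustrative sketch, not from the original mail.
 * Build: gcc -O2 -pthread -o switch_cost switch_cost.c
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <time.h>
#include <ucontext.h>
#include <unistd.h>

#define ITERS 200000	/* arbitrary */

static ucontext_t uctx_main, uctx_co;
static char co_stack[64 * 1024];
static int ping[2], pong[2];

static void co_fn(void)
{
	for (;;)
		swapcontext(&uctx_co, &uctx_main);	/* yield straight back */
}

static double now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1e9 + ts.tv_nsec;
}

static void *peer(void *arg)
{
	char c;
	(void)arg;
	for (int i = 0; i < ITERS; i++) {
		read(ping[0], &c, 1);	/* sleep until main writes */
		write(pong[1], &c, 1);	/* then wake main up again */
	}
	return NULL;
}

int main(void)
{
	/* pin everything to one CPU: the pipe test must context-switch,
	 * not run the two threads in parallel on separate CPUs */
	cpu_set_t set;
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);

	/* user-space switches via a swapcontext ping-pong */
	getcontext(&uctx_co);
	uctx_co.uc_stack.ss_sp = co_stack;
	uctx_co.uc_stack.ss_size = sizeof(co_stack);
	uctx_co.uc_link = &uctx_main;
	makecontext(&uctx_co, co_fn, 0);

	double t0 = now_ns();
	for (int i = 0; i < ITERS; i++)
		swapcontext(&uctx_main, &uctx_co);	/* 2 switches/loop */
	printf("user-space switch:    %.0f ns\n",
	       (now_ns() - t0) / (2.0 * ITERS));

	/* kernel switches via blocking pipe reads */
	pipe(ping);
	pipe(pong);
	pthread_t tid;
	pthread_create(&tid, NULL, peer, NULL);

	char c = 'x';
	t0 = now_ns();
	for (int i = 0; i < ITERS; i++) {
		write(ping[1], &c, 1);	/* wake the peer up... */
		read(pong[0], &c, 1);	/* ...sleep until it answers */
	}
	printf("kernel switch (pipe): %.0f ns\n",
	       (now_ns() - t0) / (2.0 * ITERS));

	pthread_join(tid, NULL);
	return 0;
}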

btw., the words that follow that section are quite interesting in 
retrospect:

| Furthermore, 'light thread' concepts can in no way approach the 
| performance of #2 state-machines: if you /know/ what the structure of 
| your context is, and you can program it in a specialized state-machine 
| way, there are just so many shortcuts possible that it's not even funny.

[ oops! ;-) ]

i severely underestimated the kind of performance one can reach even 
with pure procedural concepts. Btw., it was when i wrote that mail that 
i started thinking about "is it really true that the only way to get 
good performance is 100% event-based servers and nonblocking designs?", 
and started coding syslets and then threadlets.

	Ingo
