Date:	Wed, 02 Jul 2008 15:44:07 +1000
From:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
To:	Arjan van de Ven <arjan@...ux.intel.com>
Cc:	ksummit-2008-discuss@...ts.linux-foundation.org,
	Linux Kernel list <linux-kernel@...r.kernel.org>,
	Jeremy Kerr <jk@...abs.org>
Subject: Re: [Ksummit-2008-discuss] Delayed interrupt work, thread pools


> how much of this would be obsoleted if we had irqthreads ?

I'm not sure irqthreads is what I want...

First, can they call handle_mm_fault? (i.e., I'm not sure precisely
what kind of context those operate in).
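
To make the question concrete, what I'd want such a thread to be able
to do is roughly the following (totally untested sketch, along the
lines of what spu_handle_mm_fault() does today; "ea" and "is_write"
would come from the hardware's fault status registers, and the caller
is assumed to hold a reference on the mm):

#include <linux/mm.h>

static int service_fault(struct mm_struct *mm, unsigned long ea,
			 int is_write)
{
	struct vm_area_struct *vma;
	int ret = -EFAULT, fault;

	down_read(&mm->mmap_sem);
	vma = find_vma(mm, ea);
	/* a real version would also handle VM_GROWSDOWN/expand_stack() */
	if (vma && vma->vm_start <= ea) {
		fault = handle_mm_fault(mm, vma, ea, is_write);
		if (!(fault & VM_FAULT_ERROR))
			ret = 0;
	}
	up_read(&mm->mmap_sem);
	return ret;
}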

But even if that's ok, it doesn't quite satisfy my primary needs unless
we can fire off an irqthread per interrupt -occurrence- rather than
having an irqthread per source.

There are two aspects to the problem. The less important one is that I
need to be able to service other interrupts from that source
after firing off the "job".

For example, when the GFX chip, or the SPU in my case, takes a page
fault while accessing the user mm context it's attached to, I fire off
a thread to handle it (which I attach/detach from the mm, catch signals
in, etc.), but that doesn't stop execution. Transfers to/from main
memory on the SPU (and to some extent on graphics chips) are
asynchronous, and thus the SPU can still run and emit other interrupts
representing different conditions (though not other page faults).
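
The "fire and keep going" part has roughly the following shape,
(ab)using the stock workqueue API as a stand-in for the thread pool
(all the spu_fault_* names and the ctx fields are made up; also note
that with today's one-thread-per-CPU workqueues, a blocking fault
handler would stall other work on that CPU, which is part of why a
real pool is wanted):

#include <linux/workqueue.h>
#include <linux/interrupt.h>
#include <linux/slab.h>

struct spu_fault {
	struct work_struct work;
	struct mm_struct *mm;
	unsigned long ea;
	int is_write;
};

static void spu_fault_work(struct work_struct *work)
{
	struct spu_fault *f = container_of(work, struct spu_fault, work);

	service_fault(f->mm, f->ea, f->is_write);	/* sketch above */
	mmput(f->mm);
	kfree(f);
}

static irqreturn_t spu_fault_irq(int irq, void *data)
{
	struct spu_context *ctx = data;
	struct spu_fault *f = kzalloc(sizeof(*f), GFP_ATOMIC);

	if (!f)
		return IRQ_HANDLED;	/* sketch: just drop it */

	atomic_inc(&ctx->owner->mm_users);	/* pin the mm for the worker */
	f->mm = ctx->owner;			/* made-up ctx fields */
	f->ea = spu_fault_ea(ctx);		/* made-up accessors */
	f->is_write = spu_fault_is_write(ctx);
	INIT_WORK(&f->work, spu_fault_work);
	schedule_work(&f->work);

	/* return immediately: the SPU keeps running and can raise
	 * further (non-fault) interrupts on this same source */
	return IRQ_HANDLED;
}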

The second aspect, which is more important in the SPU case, is that
they context switch. When an SPU context causes a page fault and I fire
off that thread to service it, I want to be able to context switch some
other context onto the SPU, which will itself emit interrupts etc. on
that same source.

I could get away with simply allocating a kernel thread per SPU
context, and that's what we're going to do in our proof-of-concept
implementation, but I was hoping to avoid it with the thread pools in
the long run, thus saving a few resources left and right and loading
the main scheduler lists less with huge amounts of mostly idle threads.
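
For reference, that proof-of-concept version is just the obvious
thing: one dedicated, mostly idle kernel thread per context, parked
until a fault arrives (again untested, with made-up per-context
fields):

#include <linux/kthread.h>
#include <linux/wait.h>

static int spu_fault_thread(void *data)
{
	struct spu_context *ctx = data;

	while (!kthread_should_stop()) {
		/* fault_wq/fault_pending are made-up per-context fields */
		wait_event_interruptible(ctx->fault_wq,
					 ctx->fault_pending ||
					 kthread_should_stop());
		if (kthread_should_stop())
			break;
		ctx->fault_pending = 0;
		service_fault(ctx->owner, ctx->fault_ea, ctx->fault_is_write);
	}
	return 0;
}

/* at context creation time: one thread per context */
ctx->fault_tsk = kthread_run(spu_fault_thread, ctx, "spufault");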

Now, regarding the other usage scenarios mentioned here (XPC and the
NFS server) that already have thread pools, how many of those would
also be replaced by irqthreads? Not many, I think, offhand, but I can't
say for sure until I have a look... Again, that may just be me not
understanding what irqthreads are, but it looks to me like they provide
one thread per IRQ source or so, not the ability for a single IRQ
source to fire off multiple threads. Maybe if irqthreads could fork()
that would be an option...

In any case, Dave's messages imply we have at least two existing
in-tree thread pool implementations for two users, with spufs possibly
being a third (I'm keeping graphics at bay for now as I see that being
a more long-term scenario). Probably worth looking at some
consolidation.

Anyway, time for me to go look at the XPC and NFS code and see if there
is anything worth putting in common there. It might take me a little
while; there is nothing urgent (which is why I was thinking of a KS
chat, but the list is fine too), and we are doing a proof-of-concept
implementation using per-context threads in the meantime anyway.

Cheers,
Ben.


