Date:	Sat, 20 Sep 2008 11:52:31 -0700
From:	Daniel Walker <dwalker@...sta.com>
To:	Arjan van de Ven <arjan@...radead.org>
Cc:	David Miller <davem@...emloft.net>, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org, jens.axboe@...cle.com,
	steffen.klassert@...unet.com
Subject: Re: [PATCH 0/2]: Remote softirq invocation infrastructure.

On Sat, 2008-09-20 at 11:09 -0700, Arjan van de Ven wrote:


> > 
> > Dave didn't supply the users of his code, or what kind of improvement
> > was seen, or the case in which it would be needed. I think Dave knows
> > his subsystem, but the code on the surface looks like an end run
> > around some other problem area..
> 
> it's very fundamental, and has been talked about at various conferences
> as well.

At least you understand that not everyone goes to conferences..

> the basic problem is that the submitter of the IO (be it block or net)
> creates a ton of metadata state on submit, and ideally the completion
> processing happens on the same CPU, for two reasons:
> 1) to use the state in the cache
> 2) for the case where you touch userland data/structures, we assume the
> scheduler kept affinity
> 
> it's a Moses-to-the-Mountain problem, except we have four Moses' but
> only one Mountain. 
> 
> Or in CS terms: we move the work to the CPU where the userland is
> rather than moving the userland to the IRQ CPU, since there is usually
> only one IRQ but many userlands and many cpu cores.

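Just so we're talking about the same thing, I'm picturing something
roughly like this (my own rough sketch, not Dave's actual patch;
smp_call_function_single() is a guess at the cross-cpu kick, the
struct/field names are made up, and I'm ignoring the usual
can't-call-this-with-interrupts-off details):

#include <linux/smp.h>

/* Record where the IO was submitted, and push the completion work
 * back to that cpu instead of running it on the cpu that took the
 * IRQ.  (All names here are made up for illustration.) */
struct io_ctx {
	int submit_cpu;			/* recorded at submit time */
	void (*complete)(void *data);
	void *data;
};

static void run_completion(void *info)
{
	struct io_ctx *ctx = info;

	/* Runs on ctx->submit_cpu, where the submitter's metadata
	 * state should still be hot in cache. */
	ctx->complete(ctx->data);
}

static void complete_on_submit_cpu(struct io_ctx *ctx)
{
	if (ctx->submit_cpu == smp_processor_id()) {
		ctx->complete(ctx->data);
	} else {
		/* Kick the submitting cpu with an IPI; wait == 0 so
		 * we don't spin for the remote handler to finish. */
		smp_call_function_single(ctx->submit_cpu, run_completion,
					 ctx, 0);
	}
}
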
There must be some kind of trade-off here.. There's a fairly good
performance gain from having the softirq raised and run on the same cpu,
since it runs in interrupt context right after the interrupt.

If you move the softirq to another cpu then you have to re-raise it
there, and either wait for ksoftirqd to handle it or wait for an
interrupt on the new cpu.. Neither is very predictable..
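
For reference, the cheap local path boils down to setting a bit in the
current cpu's per-cpu pending mask (simplified sketch of what
raise_softirq() does when called from irq context):

#include <linux/interrupt.h>

static void kick_net_rx_local(void)
{
	unsigned long flags;

	local_irq_save(flags);
	/* Sets the NET_RX bit in *this* cpu's pending mask only; the
	 * softirq then runs in do_softirq() on the next irq_exit(),
	 * right behind the hardirq that raised it. */
	raise_softirq_irqoff(NET_RX_SOFTIRQ);
	local_irq_restore(flags);
}

That per-cpu pending mask is exactly why the remote case needs an IPI
or a ksoftirqd wakeup in the first place.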

All that, vs. bouncing data around the caches.. To what degree has this
trade-off been measured or thought about?

Daniel
