Date:	Mon, 08 Oct 2007 21:13:59 -0400
From:	Jeff Garzik <jeff@...zik.org>
To:	hadi@...erus.ca
CC:	"Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@...el.com>,
	David Miller <davem@...emloft.net>, krkumar2@...ibm.com,
	johnpol@....mipt.ru, herbert@...dor.apana.org.au, kaber@...sh.net,
	shemminger@...ux-foundation.org, jagana@...ibm.com,
	Robert.Olsson@...a.slu.se, rick.jones2@...com, xma@...ibm.com,
	gaagaan@...il.com, netdev@...r.kernel.org, rdreier@...co.com,
	mcarlson@...adcom.com, mchan@...adcom.com,
	general@...ts.openfabrics.org, tgraf@...g.ch,
	randy.dunlap@...cle.com, sri@...ibm.com
Subject: Re: [PATCH 2/3][NET_BATCH] net core use batching

jamal wrote:
> Ok, so the "concurrency" aspect is what worries me. What I am saying is
> that sooner or later you have to serialize (which is anti-concurrency).
> For example, consider CPU0 running a high prio queue and CPU1 running
> the low prio queue of the same netdevice.
> Assume CPU0 is getting a lot of interrupts or other work while CPU1
> doesn't (so as to create a condition where CPU0 is the slower one).
> Then, as long as there are packets and there is space on the driver's
> rings, CPU1 will send more packets per unit time than CPU0.
> This contradicts the strict prio scheduler, which says higher priority
> packets ALWAYS go out first regardless of the presence of low prio
> packets.  I am not sure I made sense.

You made sense.  The important point is simply that the packet 
scheduling algorithm itself dictates the level of concurrency you 
can achieve.

Strict prio is fundamentally an interface to one big imaginary queue, 
with multiple packet insertion points (the individual bands/rings, one 
per prio level).
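
To make the dequeue rule concrete, here is a minimal userspace sketch 
(my own illustration, not the actual sch_prio code; the struct names 
and the three-band setup are made up): strict prio always drains the 
highest-priority non-empty band before it will touch a lower one, no 
matter how full the lower bands are.

	/*
	 * Toy userspace model of strict-prio dequeue -- not kernel code.
	 * Band 0 is the highest priority; a lower band is only looked at
	 * when every higher band is empty.
	 */
	#include <stdio.h>

	#define NUM_BANDS 3

	struct pkt {
		struct pkt *next;
		int id;
	};

	struct band {
		struct pkt *head, *tail;
	};

	static void enqueue(struct band *b, struct pkt *p)
	{
		p->next = NULL;
		if (b->tail)
			b->tail->next = p;
		else
			b->head = p;
		b->tail = p;
	}

	static struct pkt *strict_prio_dequeue(struct band bands[NUM_BANDS])
	{
		int prio;

		for (prio = 0; prio < NUM_BANDS; prio++) {
			struct pkt *p = bands[prio].head;

			if (!p)
				continue;		/* band empty, try the next one */
			bands[prio].head = p->next;	/* pop highest non-empty band */
			if (!bands[prio].head)
				bands[prio].tail = NULL;
			return p;
		}
		return NULL;				/* nothing queued anywhere */
	}

	int main(void)
	{
		struct band bands[NUM_BANDS] = { { NULL, NULL } };
		struct pkt a = { NULL, 1 }, b = { NULL, 2 };
		struct pkt *p;

		enqueue(&bands[2], &a);		/* low prio packet queued first */
		enqueue(&bands[0], &b);		/* high prio packet queued later */

		while ((p = strict_prio_dequeue(bands)) != NULL)
			printf("sent packet %d\n", p->id);	/* prints 2, then 1 */
		return 0;
	}

The example queues a low prio packet before a high prio one, yet the 
high prio packet still goes out first -- exactly the invariant that the 
two-CPU scenario above ends up breaking.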

If you assume a scheduler implementation where each prio band is mapped 
to a separate CPU, you can certainly see how some CPUs could sit 
substantially idle while others are overloaded, depending largely on the 
data workload (and the priorities contained within it).

Moreover, you increase L1/L2 cache traffic, not just because of locks, 
but because of data dependencies:

	user	prio	packet		NIC TX
	process	band	scheduler CPU	ring

	cpu7	1	cpu1		1
	cpu5	1	cpu1		1
	cpu2	0	cpu0		0

At that point, it is probably more cache- and lock-friendly to keep the 
current TX softirq scheme.

In contrast, a pure round-robin approach is more friendly to concurrency.
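
For comparison, a round-robin dequeue over the same toy struct pkt / 
struct band from the sketch above (again my own illustration, not DRR or 
any in-tree qdisc) has no global "serve the highest band first" ordering, 
so splitting the bands across CPUs does not violate the scheduler's 
semantics:

	/*
	 * Toy round-robin dequeue over the same bands.  Each call starts at
	 * the band after the one served last, so no band's service depends
	 * on the state of the others -- which is what makes per-CPU bands
	 * parallelize more naturally than strict prio.
	 */
	static struct pkt *rr_dequeue(struct band bands[NUM_BANDS], int *last)
	{
		int i;

		for (i = 1; i <= NUM_BANDS; i++) {
			int b = (*last + i) % NUM_BANDS;
			struct pkt *p = bands[b].head;

			if (!p)
				continue;		/* empty band, skip it */
			bands[b].head = p->next;
			if (!bands[b].head)
				bands[b].tail = NULL;
			*last = b;			/* remember where we left off */
			return p;
		}
		return NULL;
	}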

	Jeff

