Message-Id: <1181741602.4050.116.camel@localhost>
Date:	Wed, 13 Jun 2007 09:33:22 -0400
From:	jamal <hadi@...erus.ca>
To:	Robert Olsson <Robert.Olsson@...a.slu.se>
Cc:	Zhu Yi <yi.zhu@...el.com>,
	Leonid Grossman <Leonid.Grossman@...erion.com>,
	Patrick McHardy <kaber@...sh.net>,
	David Miller <davem@...emloft.net>,
	peter.p.waskiewicz.jr@...el.com, netdev@...r.kernel.org,
	jeff@...zik.org, auke-jan.h.kok@...el.com
Subject: Re: [PATCH] NET: Multiqueue network device support.


Wow - Robert in the house, I can't resist, I have to say something before
I run out ;->

On Wed, 2007-13-06 at 15:12 +0200, Robert Olsson wrote:

>  Haven't got all details. IMO we need to support some "bonding-like" 
>  scenario too. Where one CPU is feeding just one TX-ring. (and TX-buffers
>  cleared by same CPU). We probably don't want to stall all queuing
>  when one ring is full. 
>  

For newer NICs - the kind that Leonid Grossman was talking about - this
makes a lot of sense in a non-virtual environment.
I think the one described by Leonid has not just 8 tx/rx rings but also
separate register sets, MSI binding, etc., IIRC. The only shared
resources, as far as I understood Leonid, are the bus and the ethernet
wire.

So in such a case (assuming 8 rings), one very sensible model is to
create 4 netdev devices, each based on a single tx/rx ring and register
set, and then have a mother netdev (what you call the bond) that feeds
these child netdevs based on some QoS parametrization. Each of the
child netdevices (by virtue of how we do things today) could be tied to
a CPU for effectiveness (because our per-CPU work is based on netdevs).
In virtual environments, the supervisor will be in charge of the
bond-like parent device.
Another model is to create a child netdev based on more than one ring -
for example, 2 tx and 2 rcv rings for two netdevices, etc.

>  The scenario I see is to support parallelism in forwarding/firewalling etc.
>  For example when RX load via HW gets split into different CPU's and for 
>  cache reasons we want to process in same CPU even with TX.
> 
>  If RX HW split keeps packets from the same flow on same CPU we shouldn't
>  get reordering within flows.

For the Leonid-NIC (for lack of a better name) it may be harder to
parallelize on rcv if you use what I said above. But you could use a
different model on receive - such as creating a single netdev with 8
rcv rings and MSI tied on rcv to 8 different CPUs.
Anyways, it is an important discussion to have. ttl.

cheers,
jamal
 

-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
