Message-ID: <18032.4838.904566.652746@robur.slu.se>
Date:	Wed, 13 Jun 2007 17:53:10 +0200
From:	Robert Olsson <Robert.Olsson@...a.slu.se>
To:	hadi@...erus.ca
Cc:	Robert Olsson <Robert.Olsson@...a.slu.se>,
	Zhu Yi <yi.zhu@...el.com>,
	Leonid Grossman <Leonid.Grossman@...erion.com>,
	Patrick McHardy <kaber@...sh.net>,
	David Miller <davem@...emloft.net>,
	peter.p.waskiewicz.jr@...el.com, netdev@...r.kernel.org,
	jeff@...zik.org, auke-jan.h.kok@...el.com
Subject: Re: [PATCH] NET: Multiqueue network device support.


jamal writes:

 > I think the one described by Leonid has not just 8 tx/rx rings but also
 > a separate register set, MSI binding etc iirc. The only shared resources
 > as far as i understood Leonid are the bus and the ethernet wire.
 
 AFAIK most new NICs will look like this...

 I still lack a lot of crucial hardware understanding.

 What will happen if we for some reason are not capable of serving
 one TX ring? Is the NIC still working, so we continue filling/sending/clearing
 on the other rings?
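
 As a rough sketch of what I mean (the my_* names and helpers below are made
 up for illustration, not taken from the patch; only netif_stop_subqueue()
 and netif_wake_subqueue() are the per-subqueue calls the patch proposes),
 a driver could stop just the ring that is full and keep the others running:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct my_ring {
	unsigned int head, tail;	/* descriptor ring state, details omitted */
};

struct my_priv {
	struct my_ring tx_ring[8];
};

/* Placeholder helpers; a real driver has hardware-specific versions. */
static bool my_ring_full(struct my_ring *txr) { return false; }
static void my_post_descriptor(struct my_ring *txr, struct sk_buff *skb) { }
static void my_reclaim_descriptors(struct my_ring *txr) { }

static int my_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct my_priv *priv = netdev_priv(dev);
	u16 ring = skb->queue_mapping;		/* ring picked above the driver */
	struct my_ring *txr = &priv->tx_ring[ring];

	if (my_ring_full(txr)) {
		/* Stop only this subqueue; the other rings keep going. */
		netif_stop_subqueue(dev, ring);
		return NETDEV_TX_BUSY;
	}

	my_post_descriptor(txr, skb);		/* hand the skb to the NIC */
	return NETDEV_TX_OK;
}

/* Per-ring TX completion: reclaim descriptors and wake that subqueue. */
static void my_tx_clean(struct net_device *dev, u16 ring)
{
	struct my_priv *priv = netdev_priv(dev);

	my_reclaim_descriptors(&priv->tx_ring[ring]);
	netif_wake_subqueue(dev, ring);
}

 So a stalled ring would only block the flows mapped to it, while the
 other rings keep draining.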

 > So in such a case (assuming 8 rings), 
 > One model is creating 4 netdev devices each based on single tx/rx ring
 > and register set and then having a mother netdev (what you call the
 > bond) that feeds these children netdev based on some qos parametrization
 > is very sensible. Each of the children netdevices (by virtue of how we
 > do things today) could be tied to a CPU for effectiveness (because our
 > per CPU work is based on netdevs).

 Some kind of supervising function for the TX is probably needed, as we still
 want to see the device as one entity. But if upcoming HW supports parallelism
 straight to the TX ring, we would of course like to use it to get minimal cache
 effects. It comes down to how this "master netdev" or queue supervisor can be
 designed.
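
 Just to illustrate one direction (a sketch only, not the patch itself;
 MY_NUM_TX_RINGS and the my_* names are assumptions), the supervisor could
 start out as simple as mapping skb->priority to a hardware ring before the
 driver's xmit runs:

#include <linux/skbuff.h>

#define MY_NUM_TX_RINGS	8

static u16 my_pick_tx_ring(const struct sk_buff *skb)
{
	/* A real supervisor would use a configurable prio->ring table
	 * (or a flow hash) instead of a plain modulo. */
	return (u16)(skb->priority % MY_NUM_TX_RINGS);
}

static void my_map_queue(struct sk_buff *skb)
{
	skb->queue_mapping = my_pick_tx_ring(skb);
}

 Anything smarter (per-CPU binding of the children netdevs, rate limits per
 ring, etc.) would then hang off this selection point.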
  
 > For the Leonid-NIC (for lack of better name) it may be harder to do
 > parallelization on rcv if you use what i said above. But you could
 > use a different model on receive - such as create a single netdev and
 > with 8 rcv rings and MSI tied on rcv to 8 different CPUs 

 Yes, that should be the way to do it... and ethtool or something to hint
 to the NIC how the incoming data is classified wrt the available CPUs. Maybe
 something more dynamic for the brave ones.
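
 A sketch of that receive model, with the usual caveats (all my_* names are
 placeholders, and pci_enable_msix() is just the current MSI-X setup call),
 could look like this: one MSI-X vector per RX ring, each with its own
 handler, so each ring's interrupt and processing can be pinned to its own
 CPU.

#include <linux/pci.h>
#include <linux/interrupt.h>

#define MY_NUM_RX_RINGS	8

struct my_rx_ring {
	unsigned int index;
	/* descriptor ring state omitted */
};

struct my_nic {
	struct my_rx_ring rx_ring[MY_NUM_RX_RINGS];
	struct msix_entry msix[MY_NUM_RX_RINGS];
};

/* Placeholder for the per-ring receive work. */
static void my_rx_poll(struct my_rx_ring *rxr) { }

static irqreturn_t my_rx_msix_handler(int irq, void *data)
{
	struct my_rx_ring *rxr = data;

	/* Only this ring's work runs here; nothing shared with the other rings. */
	my_rx_poll(rxr);
	return IRQ_HANDLED;
}

static int my_setup_rx_msix(struct my_nic *nic, struct pci_dev *pdev)
{
	int i, err;

	for (i = 0; i < MY_NUM_RX_RINGS; i++) {
		nic->msix[i].entry = i;
		nic->rx_ring[i].index = i;
	}

	err = pci_enable_msix(pdev, nic->msix, MY_NUM_RX_RINGS);
	if (err)
		return err;

	for (i = 0; i < MY_NUM_RX_RINGS; i++) {
		err = request_irq(nic->msix[i].vector, my_rx_msix_handler, 0,
				  "my-nic-rx", &nic->rx_ring[i]);
		if (err)
			return err;
	}
	return 0;
}

 Each vector can then be pinned to its CPU (e.g. via
 /proc/irq/<vector>/smp_affinity), and the ethtool-style hint would tell the
 NIC how to spread the incoming flows over those rings.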

 Cheers
					-ro
