Date:	Thu, 10 Sep 2009 13:28:14 +0200
From:	Patrick McHardy <kaber@...sh.net>
To:	Simon Horman <horms@...ge.net.au>
CC:	e1000-devel@...ts.sourceforge.net, netdev@...r.kernel.org
Subject: Re: igb bandwidth allocation configuration

Simon Horman wrote:
> Hi,
> 
> I have been looking into adding support for the 82576's per-PF/VF
> bandwidth allocation to the igb driver. It seems that the trickiest
> part is working out how to expose things to user-space.
> 
> I was thinking along the lines of an ethtool option as follows:
> 
> 	ethtool --bandwidth ethN LIMIT...
> 
> 	where:
> 		* There is one LIMIT per PF/VF.
> 		  The 82576 can have up to 7 VFs per PF,
> 		  so there would be up to 8 LIMITS
> 		* A keyword (none?) can be used to denote that
> 		  bandwidth allocation should be disabled for the
> 		  corresponding VM
> 		* Otherwise LIMITS are in Megabits/s
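> 
> 	For example, with one PF and three VFs (the values here are
> 	purely illustrative), an invocation might look like:
> 
> 		ethtool --bandwidth ethN 1000 100 100 none
> 
> 	i.e. cap the PF at 1000 Mb/s, the first two VFs at 100 Mb/s
> 	each, and leave the third VF unlimited.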
> 
> This may get a bit cumbersome if there are a lot of VFs per PF;
> perhaps a better syntax would be:
> 
> 	ethtool --bandwidth ethN M=LIMIT...
> 
> 	where:
> 		* LIMIT is as above
> 		* M is some key to denote which VF/PF is
> 		  having its limit set.
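> 
> 	For example, if M were "pf" or a VF index (the exact keys are
> 	an open question), this might look like:
> 
> 		ethtool --bandwidth ethN pf=1000 1=100 3=none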
> 
> Internally, it seems that the limits are actually applied to HW Tx
> queues rather than directly to VMs. There are 16 such queues.
> Accordingly, it might be useful to design an interface to set limits
> per-queue using ethtool. But this would also seem to require exposing
> which queues are associated with which PF/VF.
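> 
> 	A per-queue variant (syntax purely hypothetical at this point)
> 	might look something like:
> 
> 		ethtool --queue-bandwidth ethN 0=1000 8=100
> 
> 	plus some way of querying the queue-to-PF/VF mapping.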

Just an idea since I don't know much about this stuff:

Since we now have the mq packet scheduler, which exposes the device
queues as qdisc classes, how about adding driver-specific configuration
attributes that are passed to the driver by the mq scheduler? This
would allow per-queue bandwidth limits to be configured using regular
TC commands, and would also allow those limits to be used without VFs
for any kind of traffic. Drivers not supporting this would refuse the
unsupported options.
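
A rough sketch of how that could look from the command line (the
"max_rate" attribute name is made up for illustration; only the plain
mq setup in the first command exists today):

	# attach the mq scheduler, one class per hardware Tx queue
	tc qdisc add dev eth0 root handle 1: mq

	# hypothetical driver-specific attribute on the first queue's class
	tc class change dev eth0 classid 1:1 mq max_rate 100mbit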
