Message-ID: <D5C1322C3E673F459512FB59E0DDC3290501D305@orsmsx414.amr.corp.intel.com>
Date: Fri, 2 May 2008 13:18:33 -0700
From: "Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@...el.com>
To: "Andi Kleen" <andi@...stfloor.org>
Cc: <jgarzik@...ox.com>, <netdev@...r.kernel.org>
Subject: RE: [ANNOUNCE] ixgbe: Data Center Bridging (DCB) support for ixgbe

> Probably the interface shouldn't be driver specific.
>
> I would suggest you post a higher level description of the
> goals/implementation etc. of the interface.

Agreed, the netlink interface being driver-specific isn't very desirable.
I'm going to look more at the ethtool-netlink interface Jeff mentioned in
the meantime, but here is the basic need for the interface:

DCB allows configuration of both Tx and Rx parameters for bandwidth
control and priority flow control settings. It also has the ability to
group traffic classes into bandwidth groups, which can then have other
features turned on to control the way bandwidth is arbitrated within the
bandwidth group itself, and across bandwidth groups. Note that all of
these settings are in hardware, so we need an interface to the driver to
feed these configuration sets into the hardware. Originally we considered
ethtool, but the number of ioctls we would need to add to support the
dataset was quite large, so we chose to try netlink instead and not
pollute ethtool at this point.
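
To make the shape of the data more concrete, here is a rough sketch of the
kind of per-traffic-class configuration involved. The names, sizes, and
layout below are illustrative only, not the actual ixgbe or dcbd interface:

/*
 * Hypothetical sketch of the configuration DCB needs to push into the
 * hardware.  Names, sizes and layout here are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

#define DCB_MAX_TCS	8	/* 802.1p gives eight traffic classes */
#define DCB_MAX_BWGS	8	/* classes are grouped into bandwidth groups */

struct dcb_tc_cfg {
	uint8_t bwg_id;		/* bandwidth group this class belongs to */
	uint8_t bwg_percent;	/* share of that group's bandwidth, 0-100 */
	uint8_t pfc_enabled;	/* priority flow control on/off for this class */
};

struct dcb_dir_cfg {		/* one of these each for Tx and Rx */
	struct dcb_tc_cfg tc[DCB_MAX_TCS];
	uint8_t bwg_percent[DCB_MAX_BWGS];	/* link split across groups, 0-100 */
};

struct dcb_cfg {
	struct dcb_dir_cfg tx;
	struct dcb_dir_cfg rx;
};

/* Sanity check: the bandwidth group shares should cover the whole link. */
static int dcb_bwg_sane(const struct dcb_dir_cfg *d)
{
	int i, total = 0;

	for (i = 0; i < DCB_MAX_BWGS; i++)
		total += d->bwg_percent[i];
	return total == 100;
}

int main(void)
{
	struct dcb_cfg cfg = { 0 };

	/* Example: 60% of the Tx link for one group, 40% for another. */
	cfg.tx.bwg_percent[0] = 60;
	cfg.tx.bwg_percent[1] = 40;

	printf("tx bandwidth group config is %s\n",
	       dcb_bwg_sane(&cfg.tx) ? "sane" : "bogus");
	return 0;
}

The real configuration carries more than this, but the Tx/Rx split and the
bandwidth-group/traffic-class hierarchy above are the heart of what needs
to reach the hardware.
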
We have a set of userspace tools that will be posted to Sourceforge
shortly. There is a daemon (dcbd) and a command-line tool (dcbtool).
The daemon is the code that implements the netlink interface to the
driver, and feeds it the configuration sent from dcbtool or read from its
local configuration file. dcbd also implements the Data Center Bridging
Exchange protocol, which is an LLDP-based protocol that allows DCB
devices to negotiate settings between link partners. The recently
announced Cisco Nexus switches run a DCB exchange service that
implements the protocol (I don't have the spec link for it, but it's a
joint spec from Intel, Cisco, and IBM). So our userspace tool
implements the protocol, performs all the negotiation with the link
partners, and sends the configuration changes to the driver via netlink.
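
For a purely illustrative picture of that last step, here is a minimal
userspace sketch that hands a configuration blob to the driver over a
netlink socket. The protocol number, message type, and payload layout are
placeholders of my own, not the real dcbd messages:

/*
 * Sketch of how a daemon like dcbd might hand a configuration blob to
 * the driver over netlink.  The protocol number, message type and
 * payload below are placeholders, not the real interface.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#define DCB_CMD_SET_CFG	0x10	/* placeholder message type */

struct dcb_blob {		/* illustrative payload only */
	unsigned char tx_bwg_percent[8];
	unsigned char rx_bwg_percent[8];
};

int main(void)
{
	struct sockaddr_nl kernel = { .nl_family = AF_NETLINK };
	struct {
		struct nlmsghdr hdr;
		struct dcb_blob blob;
	} req;
	int fd;

	/* NETLINK_GENERIC stands in for whatever family the driver registers. */
	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_GENERIC);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&req, 0, sizeof(req));
	req.hdr.nlmsg_len = NLMSG_LENGTH(sizeof(req.blob));
	req.hdr.nlmsg_type = DCB_CMD_SET_CFG;
	req.hdr.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	req.blob.tx_bwg_percent[0] = 60;	/* same 60/40 split as above */
	req.blob.tx_bwg_percent[1] = 40;

	if (sendto(fd, &req, req.hdr.nlmsg_len, 0,
		   (struct sockaddr *)&kernel, sizeof(kernel)) < 0)
		perror("sendto");

	close(fd);
	return 0;
}

In the real flow the same channel presumably also carries driver responses
and the operational state that results from DCBX negotiation.
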
I hope that gives a better understanding of how the driver interface
works. Please feel free to ask additional questions and give more
suggestions/feedback.
Cheers,
-PJ Waskiewicz