Message-ID: <7F861DC0615E0C47A872E6F3C5FCDDBD05E07B7C@BPXM14GP.gisp.nec.co.jp>
Date:	Tue, 20 Jan 2015 23:40:05 +0000
From:	Hiroshi Shimamoto <h-shimamoto@...jp.nec.com>
To:	Bjørn Mork <bjorn@...k.no>
CC:	Alexander Duyck <alexander.duyck@...il.com>,
	"e1000-devel@...ts.sourceforge.net" 
	<e1000-devel@...ts.sourceforge.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"Choi, Sy Jong" <sy.jong.choi@...el.com>,
	Hayato Momma <h-momma@...jp.nec.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH 1/2] if_link: Add VF multicast promiscuous mode control

> Subject: Re: [PATCH 1/2] if_link: Add VF multicast promiscuous mode control
> 
> Hiroshi Shimamoto <h-shimamoto@...jp.nec.com> writes:
> 
> > From: Hiroshi Shimamoto <h-shimamoto@...jp.nec.com>
> >
> > Add netlink directives and ndo entry to control VF multicast promiscuous mode.
> >
> > The Intel ixgbe and ixgbevf drivers can handle only 30 multicast MAC
> > addresses per VF. This means we cannot assign more than 30 IPv6 addresses
> > to a single VF interface in a VM; we want thousands of IPv6 addresses
> > per VM.
> >
> > The Intel 82599 chip has a multicast promiscuous mode capability. When it
> > is enabled, all multicast packets are delivered to the target VF.
> >
> > This patch prepares the infrastructure to control that VF multicast
> > promiscuous functionality.
> 
> Adding a new hook for this seems over-complicated to me.  And it still
> doesn't solve the real problems that
>  a) the user has to know about this limit, and
>  b) has to configure the feature manually
> 
> Most of us, lacking the ability to imagine such arbitrary hardware
> limitations, will go through a few hours of frustrating debugging before
> we figure this one out...
> 
> Why can't the ixgbevf driver just automatically signal the ixgbe driver
> to enable multicast promiscuous mode whenever the list grows past the
> limit?

I had previously submitted a patch that changes the ixgbe and ixgbevf drivers
to address this issue.
https://lkml.org/lkml/2014/11/27/269

That patch introduces an API between the ixgbe and ixgbevf drivers to enable
multicast promiscuous mode, and ixgbevf enables it automatically when the
number of multicast addresses exceeds 30.

I got a comment on it and tried to clarify the point, but there was no
answer.
That is why I submitted this patch instead.
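To make the comparison concrete, the netlink/ndo route of this patch has
roughly the following shape on the PF side. Again only a sketch: the
function and field names below are illustrative, the real identifiers are
in the patch itself.

static int ixgbe_ndo_set_vf_mc_promisc(struct net_device *netdev, int vf,
					bool setting)
{
	struct ixgbe_adapter *adapter = netdev_priv(netdev);

	if (vf < 0 || vf >= adapter->num_vfs)
		return -EINVAL;

	/* remember the administrator's choice and program the per-VF
	 * multicast promiscuous bit of the 82599 accordingly
	 */
	adapter->vfinfo[vf].mc_promisc = setting; /* hypothetical field */

	return 0;
}

Userspace would drive this per VF through the new IFLA_VF_* netlink
attribute, so the administrator decides explicitly when to enable it.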

Do you think a patch limited to the ixgbe/ixgbevf drivers would be preferable?


thanks,
Hiroshi

> 
> I'd also like to note that this comment in
> drivers/net/ethernet/intel/ixgbevf/vf.c
> indicates that the author had some ideas about how more than 30
> addresses could/should be handled:
> 
> static s32 ixgbevf_update_mc_addr_list_vf(struct ixgbe_hw *hw,
> 					  struct net_device *netdev)
> {
> 	struct netdev_hw_addr *ha;
> 	u32 msgbuf[IXGBE_VFMAILBOX_SIZE];
> 	u16 *vector_list = (u16 *)&msgbuf[1];
> 	u32 cnt, i;
> 
> 	/* Each entry in the list uses 1 16 bit word.  We have 30
> 	 * 16 bit words available in our HW msg buffer (minus 1 for the
> 	 * msg type).  That's 30 hash values if we pack 'em right.  If
> 	 * there are more than 30 MC addresses to add then punt the
> 	 * extras for now and then add code to handle more than 30 later.
> 	 * It would be unusual for a server to request that many multi-cast
> 	 * addresses except for in large enterprise network environments.
> 	 */
> 
> 
> 
> The last 2 lines of that comment are of course totally bogus and
> pointless and should be deleted in any case...  It's obvious that 30
> multicast addresses is ridiculously low for lots of normal use cases.
> 
> 
> Bjørn
