Message-ID: <F6FB0E698C9B3143BDF729DF222866469129C09C@ORSMSX110.amr.corp.intel.com>
Date: Thu, 22 Jan 2015 21:34:11 +0000
From: "Skidmore, Donald C" <donald.c.skidmore@...el.com>
To: Bjørn Mork <bjorn@...k.no>
CC: David Laight <David.Laight@...LAB.COM>,
Hiroshi Shimamoto <h-shimamoto@...jp.nec.com>,
"e1000-devel@...ts.sourceforge.net"
<e1000-devel@...ts.sourceforge.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"Choi, Sy Jong" <sy.jong.choi@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Hayato Momma <h-momma@...jp.nec.com>
Subject: RE: [E1000-devel] [PATCH 1/2] if_link: Add VF multicast promiscuous
mode control
> -----Original Message-----
> From: Bjørn Mork [mailto:bjorn@...k.no]
> Sent: Thursday, January 22, 2015 11:32 AM
> To: Skidmore, Donald C
> Cc: David Laight; Hiroshi Shimamoto; e1000-devel@...ts.sourceforge.net;
> netdev@...r.kernel.org; Choi, Sy Jong; linux-kernel@...r.kernel.org;
> Hayato Momma
> Subject: Re: [E1000-devel] [PATCH 1/2] if_link: Add VF multicast promiscuous
> mode control
>
> "Skidmore, Donald C" <donald.c.skidmore@...el.com> writes:
>
> > My hang-up is more related to this: without the knob to enable it (off
> > by default) we are letting one VF dictate policy for all the other VFs
> > and the PF. If one VF needs to be in promiscuous multicast mode, so is
> > everyone else, and their stacks now need to deal with all the extra
> > multicast packets. As you point out, this might not be a direct
> > isolation concern, in that the VM could have 'chosen' to join any
> > multicast group and seen this traffic. My isolation concern is that
> > one VF has chosen that all the other VMs now have to see this
> > multicast traffic.
>
> Apologies if this question is stupid, but I just have to ask about stuff I don't
> understand...
>
> Looking at the proposed implementation, the promiscuous multicast flag
> seems to be a per-VF flag:
>
> +int ixgbe_ndo_set_vf_mc_promisc(struct net_device *netdev, int vf,
> +				 bool setting)
> +{
> + struct ixgbe_adapter *adapter = netdev_priv(netdev);
> + struct ixgbe_hw *hw = &adapter->hw;
> + u32 vmolr;
> +
> + if (vf >= adapter->num_vfs)
> + return -EINVAL;
> +
> + adapter->vfinfo[vf].mc_promisc_enabled = setting;
> +
> + vmolr = IXGBE_READ_REG(hw, IXGBE_VMOLR(vf));
> + if (setting) {
> + e_info(drv, "VF %u: enabling multicast promiscuous\n", vf);
> + vmolr |= IXGBE_VMOLR_MPE;
> + } else {
> + e_info(drv, "VF %u: disabling multicast promiscuous\n", vf);
> + vmolr &= ~IXGBE_VMOLR_MPE;
> + }
> +
> + IXGBE_WRITE_REG(hw, IXGBE_VMOLR(vf), vmolr);
> +
> + return 0;
> +}
> +
>
> I haven't read the data sheet, but I took a quick look at the excellent high
> level driver docs:
> http://www.intel.com/content/dam/doc/design-guide/82599-sr-iov-driver-companion-guide.pdf
>
> It mentions "Multicast Promiscuous Enable" in its "Thoughts for
> Customization" section:
>
> 7.1 Multicast Promiscuous Enable
>
> The controller has provisions to allow each VF to be put into Multicast
> Promiscuous mode. The Intel reference driver does not configure this
> option.
>
> The capability can be enabled/disabled by manipulating the MPE field (bit
> 28) of the PF VF L2 Control Register (PFVML2FLT – 0x0F000)
>
> and showing a section from the data sheet describing the "PF VM L2 Control
> Register - PFVML2FLT[n] (0x0F000 + 4 * n, n=0...63; RW)"
>
> To me it looks like enabling Promiscuous Multicast for a VF won't affect any
> other VF at all. Is this really not the case?
>
>
>
> Bjørn
Clearly not a dumb question at all, and I'm glad you asked it. :) I was going off the assumption (it's been a while since I read the patch) that it was using FCTRL.MPE or MANC.MCST_PASS_L2, either of which would turn multicast promiscuous on for everyone. Since the patch is actually using PFVML2FLT.MPE, that lessens my concern about its effect on the entire system.
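For anyone else following the thread, my understanding of the difference, going just by the register macros in the driver headers, is roughly the contrast below. Treat it as a sketch, not a patch; the function name is made up for illustration.

	/* Sketch only: contrast of the two MPE knobs.  Assumes the same
	 * adapter/vf context as the quoted ixgbe_ndo_set_vf_mc_promisc().
	 */
	static void example_mpe_contrast(struct ixgbe_adapter *adapter, int vf)
	{
		struct ixgbe_hw *hw = &adapter->hw;
		u32 fctrl, vmolr;

		/* Global knob: FCTRL.MPE puts the whole port into multicast
		 * promiscuous mode, so every pool/VF behind the PF sees all
		 * multicast.  This is what I had wrongly assumed the patch used.
		 */
		fctrl = IXGBE_READ_REG(hw, IXGBE_FCTRL);
		fctrl |= IXGBE_FCTRL_MPE;
		IXGBE_WRITE_REG(hw, IXGBE_FCTRL, fctrl);

		/* Per-pool knob: PFVML2FLT[n].MPE (what the driver calls
		 * VMOLR(vf)) only opens up multicast for that one VF's pool,
		 * which is what the patch actually touches.
		 */
		vmolr = IXGBE_READ_REG(hw, IXGBE_VMOLR(vf));
		vmolr |= IXGBE_VMOLR_MPE;
		IXGBE_WRITE_REG(hw, IXGBE_VMOLR(vf), vmolr);
	}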
That said, I would still prefer having a way to override this behavior from the PF, although I admit my argument is now weaker. I'm still concerned about a VF changing the behavior of the PF without any way to prevent it. Part of that is philosophical (the PF sets policy, not the VF), but it could also have a noticeable effect on the overall system: if any other VFs (or the PF) are receiving multicast packets, those packets have to be replicated, which is a performance hit. With the MC hash filter that replication is bounded, whereas once any pool is in MC promiscuous mode, every multicast packet destined for another pool has to be replicated to it as well. I can imagine environments (e.g. public clouds) where you don't trust what is running in your VMs and would want to block this from happening.
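To be concrete about the kind of knob I have in mind, something along the lines of the sketch below would be enough. The allow_mc_promisc field is purely a placeholder (nothing like it exists in the driver's vfinfo today), and how the PF would set it (ndo, sysfs, or module parameter) is a separate question.

	/* Sketch only: allow_mc_promisc is a made-up placeholder field. */
	static int example_vf_mc_promisc_gate(struct ixgbe_adapter *adapter,
					      int vf, bool setting)
	{
		struct ixgbe_hw *hw = &adapter->hw;
		u32 vmolr;

		if (vf >= adapter->num_vfs)
			return -EINVAL;

		/* PF policy: the administrator has to opt this VF in before
		 * any request, explicit or automatic, can flip VMOLR.MPE on.
		 */
		if (setting && !adapter->vfinfo[vf].allow_mc_promisc)
			return -EPERM;

		vmolr = IXGBE_READ_REG(hw, IXGBE_VMOLR(vf));
		if (setting)
			vmolr |= IXGBE_VMOLR_MPE;
		else
			vmolr &= ~IXGBE_VMOLR_MPE;
		IXGBE_WRITE_REG(hw, IXGBE_VMOLR(vf), vmolr);

		return 0;
	}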
In some ways it is almost the mirror image of the issue you brought up:
> Adding a new hook for this seems over-complicated to me. And it still
> doesn't solve the real problems that
> a) the user has to know about this limit, and
> b) manually configure the feature
My reverse argument might be that if this happens automatically, it could take the VM provider a long time to realize performance has taken a hit because some VM asked to join 31 multicast groups and entered MC promiscuous mode, only to then find that they have no way to block such behavior.
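Just to spell out the scenario I'm worried about: the automatic variant would presumably live in the PF's handling of the VF's multicast list from the mailbox, roughly like the sketch below. The function name, the 'entries' parameter and the policy field are all hypothetical; only the IXGBE_* macros (and, if I remember right, the 30-entry IXGBE_MAX_VF_MC_ENTRIES limit) are real.

	/* Hypothetical sketch of the automatic path.  'entries' is the
	 * number of multicast addresses the VF handed the PF via mailbox.
	 */
	static void example_vf_mc_request(struct ixgbe_adapter *adapter,
					  int vf, int entries)
	{
		struct ixgbe_hw *hw = &adapter->hw;
		u32 vmolr = IXGBE_READ_REG(hw, IXGBE_VMOLR(vf));

		if (entries > IXGBE_MAX_VF_MC_ENTRIES &&
		    adapter->vfinfo[vf].allow_mc_promisc) {
			/* More groups than the hash table can hold: fall back
			 * to per-pool multicast promiscuous, but only if the
			 * PF policy bit (placeholder, as above) allows it.
			 */
			vmolr |= IXGBE_VMOLR_MPE;
		} else {
			/* Stay on the multicast hash filter; anything beyond
			 * the limit simply isn't programmed.
			 */
			vmolr &= ~IXGBE_VMOLR_MPE;
		}

		IXGBE_WRITE_REG(hw, IXGBE_VMOLR(vf), vmolr);
	}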
Maybe I wouldn't be as concerned if the patch author could provide some performance results showing this doesn't have as negative an effect as I'm afraid it might.
-Don Skidmore <donald.c.skidmore@...el.com>