Message-ID: <7F861DC0615E0C47A872E6F3C5FCDDBD05E9DD71@BPXM14GP.gisp.nec.co.jp>
Date: Mon, 11 May 2015 23:55:21 +0000
From: Hiroshi Shimamoto <h-shimamoto@...jp.nec.com>
To: "Skidmore, Donald C" <donald.c.skidmore@...el.com>,
Or Gerlitz <gerlitz.or@...il.com>,
"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>
CC: David Miller <davem@...emloft.net>,
Linux Netdev List <netdev@...r.kernel.org>,
"nhorman@...hat.com" <nhorman@...hat.com>,
"sassmann@...hat.com" <sassmann@...hat.com>,
"jogreene@...hat.com" <jogreene@...hat.com>,
"Choi, Sy Jong" <sy.jong.choi@...el.com>,
Edward Cree <ecree@...arflare.com>,
Rony Efraim <ronye@...lanox.com>
Subject: RE: [net-next 07/11] if_link: Add VF multicast promiscuous control
> > > -----Original Message-----
> > > From: Hiroshi Shimamoto [mailto:h-shimamoto@...jp.nec.com]
> > > Sent: Wednesday, May 06, 2015 10:55 PM
> > > To: Skidmore, Donald C; Or Gerlitz; Kirsher, Jeffrey T
> > > Cc: David Miller; Linux Netdev List; nhorman@...hat.com;
> > > sassmann@...hat.com; jogreene@...hat.com; Choi, Sy Jong; Edward Cree;
> > > Rony Efraim
> > > Subject: RE: [net-next 07/11] if_link: Add VF multicast promiscuous control
> > >
> > > > > -----Original Message-----
> > > > > From: Or Gerlitz [mailto:gerlitz.or@...il.com]
> > > > > Sent: Sunday, May 03, 2015 7:16 AM
> > > > > To: Kirsher, Jeffrey T
> > > > > Cc: David Miller; Hiroshi Shimamoto; Linux Netdev List;
> > > > > nhorman@...hat.com; sassmann@...hat.com; jogreene@...hat.com; Choi,
> > > > > Sy Jong; Edward Cree; Skidmore, Donald C; Rony Efraim
> > > > > Subject: Re: [net-next 07/11] if_link: Add VF multicast promiscuous
> > > > > control
> > > > >
> > > > > On Sat, May 2, 2015 at 1:42 PM, Jeff Kirsher
> > > > > <jeffrey.t.kirsher@...el.com>
> > > > > wrote:
> > > > > > From: Hiroshi Shimamoto <h-shimamoto@...jp.nec.com>
> > > > > >
> > > > > > Add netlink directives and ndo entry to allow VF multicast
> > > > > > promiscuous mode.
> > > > > >
> > > > > > This controls the permission to enter VF multicast promiscuous mode.
> > > > > > The administrator explicitly grants multicast promiscuous mode per VF.
> > > > > >
> > > > > > When the VF is in multicast promiscuous mode, all multicast
> > > > > > packets are sent to the VF.
> > > > > >
> > > > > > Don't allow VF multicast promiscuous mode if the VM isn't fully trusted.
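
As a rough illustration of the kind of per-VF flag and ndo-style setter such a control implies (all names below are invented for this sketch, not taken from the actual patch):

/* Sketch only: a per-VF administrative flag plus an ndo-style setter.
 * my_pf_adapter, my_vf_info and my_pf_set_vf_mc_promisc() are invented
 * names used purely for illustration. */
#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/pci.h>

struct my_vf_info {
	bool mc_promisc_allowed;	/* set by the PF administrator */
};

struct my_pf_adapter {
	struct pci_dev *pdev;
	struct my_vf_info *vfinfo;	/* one entry per VF */
};

static int my_pf_set_vf_mc_promisc(struct net_device *dev, int vf, bool setting)
{
	struct my_pf_adapter *adapter = netdev_priv(dev);

	if (vf < 0 || vf >= pci_num_vf(adapter->pdev))
		return -EINVAL;

	/* The PF consults this flag when the VF later asks, over the
	 * PF/VF mailbox, to enter multicast promiscuous mode. */
	adapter->vfinfo[vf].mc_promisc_allowed = setting;
	return 0;
}
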
> > > > >
> > > > > Guys,
> > > > >
> > > > > I don't think the discussion we held in the past [1] on the matter
> > > > > actually converged. A few open points came up while debating it
> > > > > internally with Rony:
> > > > >
> > > > > 1. maybe what we actually want here is an API that states a VF to be
> > > > > privileged/trusted and then we can overload the feature set of being
> > > > > such?
> > > >
> > > > I suggested this originally, but there was push back as it was thought
> > > > too generic, since the definition of what being "trusted" means would
> > > > differ from driver to driver. Personally I still like the idea of
> > > > having one mode saying that we "trust" a given VF. Then that VF can
> > > > request whatever support it wants from the PF regardless of possible
> > > > negative impact on other VFs. What is possible to support would then
> > > > be left to the interface between the VF and PF. This of course would
> > > > depend on what the given HW could support, so this mode would mean
> > > > different things for different adapters, and I do see why some might
> > > > see this as a concern.
> > >
> > > The point is granularity, right?
> > > Allow everything or allow a subset of features.
> >
> > Nice way to sum it up. The trick with the subset-of-features path is that not all hardware can/will support everything.
> > Also I worry about the feature list growing, requiring more and more knobs on the PF to allow/disallow granular
> > behavior that could break VF isolation. A simple hint to the PF that a given VF is "trusted" would allow all that
> > complexity to be contained in the mailbox protocol between the PF and VF.
> >
> > All that said, I realize others are concerned with the ambiguity of such a field and I can certainly live with your
> > implementation.
>
> I see, it seems better to have a single knob which indicates "trust this VF", where the PF allows requests
> from a trusted VF that might hurt performance or security, instead of creating a knob for multicast promiscuous,
> a knob for feature X, and so on.
>
> I will make a patch to implement that "trusted" knob instead of the MC promiscuous knob.
> Is there any comment?
Any comments?
Is that the way to go ahead with this series?
thanks,
Hiroshi
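
As a rough sketch of what that single knob could look like on the PF side (all names below are illustrative; the eventual interface may well differ):

/* Sketch only: one per-VF "trusted" flag, set by the administrator, that
 * the PF/VF mailbox handler consults before granting requests (such as
 * multicast promiscuous mode) that may affect other VFs. */
#include <linux/errno.h>
#include <linux/netdevice.h>

struct my_vf_info {
	bool trusted;
};

struct my_pf_adapter {
	int num_vfs;
	struct my_vf_info *vfinfo;	/* one entry per VF */
};

/* Administrative side: something like "ip link set <pf> vf <n> trust on"
 * would end up here via a matching netlink attribute. */
static int my_pf_set_vf_trust(struct net_device *dev, int vf, bool setting)
{
	struct my_pf_adapter *adapter = netdev_priv(dev);

	if (vf < 0 || vf >= adapter->num_vfs)
		return -EINVAL;

	adapter->vfinfo[vf].trusted = setting;
	return 0;
}

/* Mailbox side: a trusted VF gets its request granted, an untrusted VF
 * keeps the existing limits. */
static int my_pf_vf_wants_mc_promisc(struct my_pf_adapter *adapter, int vf)
{
	return adapter->vfinfo[vf].trusted ? 0 : -EPERM;
}
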
>
> >
> > >
> > > >
> > > > >
> > > > > 2. the suggested API only allows either an unlimited number of multicast
> > > > > groups per VF or a limited number, and both numbers are vendor dependent,
> > > > > right?
> > > > > maybe what we need for this specific matter is specifying how many
> > > > > multicast groups are allowed for a VF?
> > > >
> > > > I believe the idea behind this interface was that it would allow VFs
> > > > to request unlimited multicast groups, as opposed to the current
> > > > behavior of each adapter offering some limited number. That limit is
> > > > of course defined by a given adapter's HW/SW limitations. Up until now
> > > > you could keep asking for new multicast groups until the PF replied with
> > > > an error, so we never really exported this information before. This new
> > > > mode just allows us to never reach the point where the PF would deny a VF
> > > > request to join a MC group. Seems to me that an additional interface to
> > > > provide the max number of supported multicast groups would be new
> > > > functionality that could be independent of this patch, and in fact could
> > > > exist even without this patch.
> > > > Or am I missing what you're asking for here? :)
> > >
> > > I think that the current limitation of multicast on ixgbevf comes from the
> > > implementation of the mailbox API between the VF and PF, which has 32 words.
> > >
> >
> > In the end the limit on the number of MC groups, if you don't use promiscuous mode, is the size of the multicast table array.
> > We could be sharing this table better between all users rather than imposing the arbitrary limit, but you would still hit a hard limit
> > due to the size of the table.
>
> Just to clarify the current implementation: I think there is no hard limit on the number of MC addresses.
> The ixgbe driver uses the MTA (multicast table array), and the MTA is shared among all VFs. A VF requests
> registration of at most 30 multicast address hash values, and the PF sets the corresponding bits in the MTA.
> When a multicast packet comes in, the NIC checks the MTA bit and transmits the packet to every VF. The 82599
> uses a 12-bit hash of the MC address and the MTA is a 4096-bit array, so the MTA covers every possible MC
> address for filtering. I think if all bits in the MTA are set, it means that no MC packet is dropped.
>
> thanks,
> Hiroshi
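
To make the table arithmetic above concrete, here is a simplified, user-space style sketch of the MTA bookkeeping (the 12-bit hash below is a placeholder; the real hardware selects particular bits of the MAC address according to the configured filter type):

#include <stdint.h>

#define MTA_BITS  4096u			/* 82599 MTA size */
#define MTA_WORDS (MTA_BITS / 32u)	/* 128 32-bit registers */

static uint32_t mta[MTA_WORDS];		/* shared by all VFs */

/* Placeholder 12-bit "hash" taken from the last two bytes of the
 * multicast MAC address; the real driver picks different bits. */
static uint16_t mta_vector(const uint8_t mac[6])
{
	return (((uint16_t)mac[4] << 8) | mac[5]) & 0x0fff;
}

/* What the PF does for each hash value a VF registers: set one bit in
 * the shared 4096-bit table.  With every bit set, no multicast frame
 * would be filtered out, which is effectively multicast promiscuous. */
static void mta_set(const uint8_t mac[6])
{
	uint16_t vector = mta_vector(mac);

	mta[vector >> 5] |= 1u << (vector & 0x1f);
}
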
>
> >
> > > By the way, our requirement is VF multicast promiscuous mode for SDN/NFV
> > > usage.
> > > There is a feature in the HW to enable it, and we'd like to use it.
> > > I know there is a possibility of performance degradation.
> > >
> > > thanks,
> > > Hiroshi
> >
> > I think your method is the way to go, in that if you ask for more than we allow per VF and the PF has this ability enabled
> > we just put the VF into multicast promiscuous mode. However I don't see the advantage of having an interface to tell
> > how many groups need to be requested before this happens. If you were worried about the performance degradation of
> > entering promiscuous multicast, don't allow it in the PF, which of course will be the default.
> >
> > Thanks,
> > -Don Skidmore <donald.c.skidmore@...el.com>
> >
> >
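
To illustrate the policy Don describes above, a minimal PF-side sketch (not the actual ixgbe mailbox handler; the helper names and the per-VF limit constant are stand-ins):

#include <linux/errno.h>
#include <linux/types.h>

struct pf_ctx;					/* hypothetical PF driver state */

/* Hypothetical helpers standing in for driver/HW specifics. */
bool vf_allowed_mc_promisc(struct pf_ctx *pf, int vf);
void hw_enable_vf_mc_promisc(struct pf_ctx *pf, int vf);
void hw_program_vf_mta(struct pf_ctx *pf, int vf, const u16 *hashes, int count);

#define MAX_VF_MC_ENTRIES 30	/* per-VF limit mentioned earlier in the thread */

static int pf_handle_set_multicast(struct pf_ctx *pf, int vf,
				   const u16 *hashes, int count)
{
	if (count <= MAX_VF_MC_ENTRIES) {
		hw_program_vf_mta(pf, vf, hashes, count);	/* normal path */
		return 0;
	}

	/* The VF wants more MC groups than the per-VF limit allows. */
	if (!vf_allowed_mc_promisc(pf, vf))
		return -EPERM;		/* default: deny, as today */

	hw_enable_vf_mc_promisc(pf, vf);	/* admin opted this VF in */
	return 0;
}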