Message-ID: <CAJ3xEMhJckJq6HDFm_QTtDP_SG1jPJ55q1b-_Vg0WoC_UqO_Wg@mail.gmail.com>
Date: Thu, 24 May 2018 22:26:03 +0300
From: Or Gerlitz <gerlitz.or@...il.com>
To: Jakub Kicinski <jakub.kicinski@...ronome.com>
Cc: David Miller <davem@...emloft.net>,
Linux Netdev List <netdev@...r.kernel.org>,
oss-drivers@...ronome.com, Jiri Pirko <jiri@...nulli.us>,
Jay Vosburgh <j.vosburgh@...il.com>,
Veaceslav Falico <vfalico@...il.com>,
Andy Gospodarek <andy@...yhouse.net>
Subject: Re: [PATCH net-next 0/8] nfp: offload LAG for tc flower egress
On Thu, May 24, 2018 at 9:49 PM, Jakub Kicinski
<jakub.kicinski@...ronome.com> wrote:
> On Thu, 24 May 2018 20:04:56 +0300, Or Gerlitz wrote:
>> Does this apply also to non-uplink representors? If yes, what is the use case?
>>
>> We are looking at supporting uplink lag in the sriov switchdev scheme - we refer
>> to it as "vf lag" -- b/c the netdev and rdma devices seen by the VF are actually
>> subject to HA and/or LAG. I wasn't sure if/how you limit this series
>> to uplink reprs.
>
> I don't think we have a limitation on the output port within the LAG.
> But keep in mind that in our devices all ports belong to the same eswitch/PF,
> so bonding uplink ports is generally sufficient; I'm not sure VF
> bonding adds much HA. IOW, AFAIK we support VF bonding because HW can do
> it easily, not because we have a strong use case for it.
To make it clear, "vf lag" is a code name for uplink lag; what we mean is that
we provide the VM a lagged VF. Either way, the lag is done on the uplink reps,
not on the VF reps. Unlike the uplink port, which is a physical one, the VF
vport is a virtual one -- what would be the benefit of bonding two vports?
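
For reference, a minimal sketch of what we mean by lag on the uplink reps --
interface names, PCI addresses, and the exact command sequence below are
placeholders/assumptions, not taken from this series:

  # put both PFs into switchdev mode (hypothetical PCI addresses)
  devlink dev eswitch set pci/0000:03:00.0 mode switchdev
  devlink dev eswitch set pci/0000:03:00.1 mode switchdev

  # bond the two uplink (physical port) netdevs, not the VF representors
  ip link add bond0 type bond mode 802.3ad
  ip link set p0 down
  ip link set p1 down
  ip link set p0 master bond0
  ip link set p1 master bond0
  ip link set bond0 up

  # a flower rule on a VF representor can then redirect to the bond,
  # which the driver may offload as LAG egress
  tc qdisc add dev vf0_rep ingress
  tc filter add dev vf0_rep ingress protocol ip flower \
      action mirred egress redirect dev bond0

The VF in the VM keeps seeing a single netdev/rdma device, but its traffic
egresses through whichever uplink the bond selects.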