Message-ID: <CAJ3xEMj79J0F9dpBNNLGzq1Mub7ig1itv7_Wgpsi6PxVgmRqsg@mail.gmail.com>
Date: Tue, 4 Sep 2018 23:37:29 +0300
From: Or Gerlitz <gerlitz.or@...il.com>
To: Jakub Kicinski <jakub.kicinski@...ronome.com>
Cc: Florian Fainelli <f.fainelli@...il.com>,
Simon Horman <simon.horman@...ronome.com>,
Andy Gospodarek <andy@...yhouse.net>,
"mchan@...adcom.com" <mchan@...adcom.com>,
Jiri Pirko <jiri@...nulli.us>,
Alexander Duyck <alexander.duyck@...il.com>,
Frederick Botha <frederick.botha@...ronome.com>,
nick viljoen <nick.viljoen@...ronome.com>,
Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: phys_port_id in switchdev mode?
On Tue, Sep 4, 2018 at 1:20 PM, Jakub Kicinski
<jakub.kicinski@...ronome.com> wrote:
> On Mon, 3 Sep 2018 12:40:22 +0300, Or Gerlitz wrote:
>> On Tue, Aug 28, 2018 at 9:05 PM, Jakub Kicinski wrote:
>> > Hi!
>>
>> Hi Jakub, and sorry for the late reply; this crazily hot summer refuses to die.
>>
>> Note I replied a couple of minutes ago but it didn't get to the list, so
>> let's take it from this one:
>>
>> > I wonder if we can use phys_port_id in switchdev to group together
>> > interfaces of a single PCI PF? Here is the problem:
>> >
>> > With a mix of PF and VF interfaces it gets increasingly difficult to
>> > figure out which one corresponds to which PF. We can identify which
>> > *representor* is which, by means of phys_port_name and devlink
>> > flavours. But if the actual VF/PF interfaces are also present on the
>> > same host, it gets confusing when one tries to identify the PF they
>> > came from. Generally one has to resort to matching the PCI DBDF of
>> > the PF and the VFs, or to reading the relevant info out of ethtool -i.
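(side note: the DBDF matching you describe usually ends up as a small sysfs
walk, something like the rough sketch below; the paths are standard sysfs,
the function name is made up)

# sketch only: the "match PCI DBDF" exercise described above, done by hand.
# Given a PF netdev, list its VFs' PCI addresses and whatever netdevs are
# bound to them. All paths are standard sysfs; nothing here is driver API.
import glob
import os

def vf_netdevs_of_pf(pf_netdev):
    pf_dev = os.path.realpath(f"/sys/class/net/{pf_netdev}/device")
    out = {}
    for virtfn in sorted(glob.glob(os.path.join(pf_dev, "virtfn*"))):
        vf_dbdf = os.path.basename(os.path.realpath(virtfn))   # e.g. 0000:04:02.3
        net_dir = os.path.join(virtfn, "net")
        out[vf_dbdf] = os.listdir(net_dir) if os.path.isdir(net_dir) else []
    return out

# e.g. vf_netdevs_of_pf("enp4s0f0") might return {'0000:04:02.0': ['enp4s2'], ...}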
>> >
>> > In multi host scenario this is particularly painful, as there seems to
>> > be no immediately obvious way to match PCI interface ID of a card (0,
>> > 1, 2, 3, 4...) to the DBDF we have connected.
>> >
>> > Another angle to this is legacy SR-IOV NDOs. User space picks a netdev
>> > from /sys/bus/pci/$VF_DBDF/physfn/net/ to run the NDOs on in a somewhat
>> > random manner, which means we have to provide those for all devices with
>> > a link to the PF (all reprs). And we have to link them (a) because it's
>> > right (tm) and (b) to get correct naming.
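(and the arbitrary pick from physfn/net/ that user space does today is
literally something like this sketch; only the function name is invented)

# sketch of today's userspace behaviour described above: starting from a VF's
# PCI address, pick *some* netdev under .../physfn/net/ to run the legacy
# SR-IOV NDOs on. The sorted()[0] pick is exactly the arbitrary choice at issue.
import os

def netdev_for_legacy_ndos(vf_dbdf):
    physfn_net = f"/sys/bus/pci/devices/{vf_dbdf}/physfn/net"
    candidates = sorted(os.listdir(physfn_net))   # PF netdev plus any linked reprs
    return candidates[0] if candidates else None  # essentially an arbitrary pick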
>>
>> Wait, as you commented later on, not only the Mellanox VF reprs but also
>> the nfp VF reprs are not linked to the PF, because the ip link output
>> grows quadratically.
>
> Right, correct. If we set phys_port_id libvirt will reliably pick the
> correct netdev to run NDOs on (PF/PF repr) so we can remove them from
> the other netdevs and therefore limit the size of ip link show output.
Just to make sure, is this a suggested/future libvirt flow rather than an existing one?
> Ugh, you're right! Libvirt is our primary target here. IIUC we need
> phys_port_id on the actual VF and then *a* netdev linked to physfn in
> sysfs which will have the legacy NDOs.
>
> We can't set the phys_port_id on the VF reprs because then we're back
> to the problem of ip link output growing. Perhaps we shouldn't set it
> on PF repr either?
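(just to spell out how userspace would then tell the netdevs apart, assuming
phys_port_id ends up only on the actual PF/VF netdevs while the reprs keep
phys_switch_id/phys_port_name; a sketch, not a real tool)

# sketch: classify a netdev under the placement discussed above, i.e. assuming
# phys_port_id is exposed only on the actual PF/VF netdevs while the reprs
# expose phys_switch_id (and phys_port_name). The attributes are the standard
# /sys/class/net/<dev>/ files; the classification logic is only an illustration.
def read_attr(netdev, attr):
    try:
        with open(f"/sys/class/net/{netdev}/{attr}") as f:
            return f.read().strip() or None
    except OSError:
        return None            # attribute not exposed / not supported

def classify(netdev):
    if read_attr(netdev, "phys_switch_id"):
        return "repr"          # switchdev port (PF/VF representor)
    if read_attr(netdev, "phys_port_id"):
        return "actual PF/VF"  # carries the per-PF grouping id
    return "other"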
>
> Let's make a table (assuming bare metal cloud scenario where Host0 is
> controlling the network, while Host1 is the actual server):
Yeah, this would be a super-set of the non-smartnic case where
we have only one host.
[...]
> With this libvirt on Host0 should easily find the actual PF0 netdev to
> run the NDO on, if it wants to use VFs:
> - libvirt finds act VF0/0 to plug into the VM;
> - reads its phys_port_id -> "PF0 SN";
> - finds netdev with "PF0 SN" linked to physfn -> "act PF0";
> - runs NDOs on "act PF0" for PF0's VF correctly.
What you describe here doesn't seem to be networking
configuration, as it deals only with the VFs and the PF but not with the
reprs, and hence AFAIK runs on Host1.
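(anyway, spelling out the lookup steps you list above as code, it would be
roughly the sketch below; the helper names are invented, and "run the NDOs"
would then be the usual ip-link VF commands on the returned netdev)

# rough sketch of the lookup listed above: given the VF netdev picked for the
# VM, read its phys_port_id and find the netdev linked to physfn in sysfs that
# reports the same id ("act PF0"); that is the one to run the legacy NDOs on.
# Helper names are invented; the sysfs paths/attributes are the standard ones.
import os

def read_attr(netdev, attr):
    try:
        with open(f"/sys/class/net/{netdev}/{attr}") as f:
            return f.read().strip() or None
    except OSError:
        return None

def pf_netdev_for_vf(vf_netdev):
    port_id = read_attr(vf_netdev, "phys_port_id")       # read "PF0 SN"
    if port_id is None:
        return None
    physfn_net = f"/sys/class/net/{vf_netdev}/device/physfn/net"
    for candidate in sorted(os.listdir(physfn_net)):      # netdevs linked to physfn
        if read_attr(candidate, "phys_port_id") == port_id:
            return candidate                              # "act PF0"
    return None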
[...]
> Should Host0 in bare metal cloud have access to SR-IOV NDOs of Host1?
I need to think about that.