Message-ID: <AM6PR05MB51423540AF9A93ED4B607735C5500@AM6PR05MB5142.eurprd05.prod.outlook.com>
Date: Tue, 17 Dec 2019 19:06:38 +0000
From: Yuval Avnery <yuvalav@...lanox.com>
To: Jakub Kicinski <jakub.kicinski@...ronome.com>
CC: Jiri Pirko <jiri@...lanox.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andy Gospodarek <andy@...yhouse.net>,
Daniel Jurgens <danielj@...lanox.com>,
Parav Pandit <parav@...lanox.com>
Subject: RE: [PATCH net-next] netdevsim: Add max_vfs to bus_dev
Jakub,
I double-checked; the use case is not setting a VF MAC, but setting the PF MAC.
We are talking about a bare-metal cloud machine, where the whole host is given to the customer,
so you have no control over what is running there.
For this scenario, the cloud orchestration software wants to distribute MAC addresses to the hosts' PFs.
So you can consider the PF a "VF" here, and the control CPU is the "real" PF.
There is nothing cloud-related running on the host.
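To make the flow concrete, a rough sketch (the port handle and MAC below are examples,
and the devlink "port function" syntax here follows the proposed direction, so the final
form may differ):

    # On the smartnic control CPU, not on the host:
    # list the ports that represent the host-facing PFs
    devlink port show

    # assign a MAC to the host PF through its port function
    # (pci/0000:03:00.0/1 is an example port handle)
    devlink port function set pci/0000:03:00.0/1 hw_addr 00:11:22:33:44:55

From the host's point of view, the PF simply comes up with that MAC.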
The reason we extended it to support VFs is to have a unified API that works
in both smartnic and non-smartnic modes,
thus replacing "ip link set vf" in non-smartnic mode.
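For comparison, in non-smartnic mode the same assignment is done today from the host
with the VF-specific API, while the unified form keeps one command shape in both modes
(device names and port handles are examples):

    # existing, PCI-topology-dependent API, run on the host:
    ip link set dev eth0 vf 0 mac 00:11:22:33:44:55

    # unified form, usable from the host or from the control CPU:
    devlink port function set pci/0000:03:00.0/2 hw_addr 00:11:22:33:44:55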
Furthermore, when considering min/max_rate groups for the same bare-metal scenario,
it doesn't make sense to implement them for the PF in devlink for this scenario
and in "ip link" for every other scenario (running from the host).
So again, it is a single user API which does not depend on the PCI topology.
Ideally, if "ip link" weren't PCI-dependent, we could have added it there.
As for your concern about "ip link" getting errors on the host:
it is configurable (via NVCFG) whether the host is allowed to configure MACs and other attributes,
so it is up to the cloud administrator.
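For completeness, that knob is flipped with the firmware configuration tool; a sketch
(the parameter name HOST_MAC_CONFIG is a placeholder, not the real NVCFG name, and the
device path is an example):

    # query the current firmware (NVCFG) settings
    mlxconfig -d /dev/mst/mt4119_pciconf0 query

    # placeholder parameter: disallow MAC/attribute configuration from the host
    mlxconfig -d /dev/mst/mt4119_pciconf0 set HOST_MAC_CONFIG=0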
> -----Original Message-----
> From: Yuval Avnery
> Sent: Monday, December 16, 2019 2:53 PM
> To: Jakub Kicinski <jakub.kicinski@...ronome.com>
> Cc: Jiri Pirko <jiri@...lanox.com>; davem@...emloft.net;
> netdev@...r.kernel.org; linux-kernel@...r.kernel.org; Andy Gospodarek
> <andy@...yhouse.net>; Daniel Jurgens <danielj@...lanox.com>; Parav
> Pandit <parav@...lanox.com>
> Subject: RE: [PATCH net-next] netdevsim: Add max_vfs to bus_dev
>
>
>
> > -----Original Message-----
> > From: Jakub Kicinski <jakub.kicinski@...ronome.com>
> > Sent: Monday, December 16, 2019 12:45 PM
> > To: Yuval Avnery <yuvalav@...lanox.com>
> > Cc: Jiri Pirko <jiri@...lanox.com>; davem@...emloft.net;
> > netdev@...r.kernel.org; linux-kernel@...r.kernel.org; Andy Gospodarek
> > <andy@...yhouse.net>; Daniel Jurgens <danielj@...lanox.com>
> > Subject: Re: [PATCH net-next] netdevsim: Add max_vfs to bus_dev
> >
>
> > The ip-link API will suddenly start returning errors which may not be
> > expected to the user space. So the question is what the user space is
> > you're expecting to run/testing with? _Some_ user space should prove
> > this design out before we merge it.
> >
> > The alternative design is to "forward" hosts ip-link requests to the
> > NIC CPU and let software running there talk to the cloud back end.
> > Rather than going
> > customer -> cloud API -> NIC,
> > go
> > customer -> NIC -> cloud API
> > That obviously is more complex, but has the big advantage of nothing
> > on the host CPU having to change.
>
> I will try to summarize your comments:
> 1. There will always be encapsulation, therefore network management
> shouldn't care what MACs customers use.
> > 2. The customer always requests a MAC; it never simply acquires one from the
> > NIC.
> > There is always going to be an entity running on the host setting MACs on
> > VFs.
>
> Is that correct?
>
>
>
>