[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <YUAUC1AJP6JVMxBr@unreal>
Date: Tue, 14 Sep 2021 06:16:27 +0300
From: Leon Romanovsky <leon@...nel.org>
To: "Ertman, David M" <david.m.ertman@...el.com>
Cc: "Saleem, Shiraz" <shiraz.saleem@...el.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"kuba@...nel.org" <kuba@...nel.org>,
"yongxin.liu@...driver.com" <yongxin.liu@...driver.com>,
"Nguyen, Anthony L" <anthony.l.nguyen@...el.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
"intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"jgg@...pe.ca" <jgg@...pe.ca>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"Singhai, Anjali" <anjali.singhai@...el.com>,
"Parikh, Neerav" <neerav.parikh@...el.com>,
"Samudrala, Sridhar" <sridhar.samudrala@...el.com>
Subject: Re: [PATCH RESEND net] ice: Correctly deal with PFs that do not
support RDMA
On Mon, Sep 13, 2021 at 04:07:28PM +0000, Ertman, David M wrote:
> > -----Original Message-----
> > From: Saleem, Shiraz <shiraz.saleem@...el.com>
> > Sent: Monday, September 13, 2021 8:50 AM
> > To: Leon Romanovsky <leon@...nel.org>; Ertman, David M
> > <david.m.ertman@...el.com>
> > Cc: davem@...emloft.net; kuba@...nel.org; yongxin.liu@...driver.com;
> > Nguyen, Anthony L <anthony.l.nguyen@...el.com>;
> > netdev@...r.kernel.org; linux-kernel@...r.kernel.org; Brandeburg, Jesse
> > <jesse.brandeburg@...el.com>; intel-wired-lan@...ts.osuosl.org; linux-
> > rdma@...r.kernel.org; jgg@...pe.ca; Williams, Dan J
> > <dan.j.williams@...el.com>; Singhai, Anjali <anjali.singhai@...el.com>;
> > Parikh, Neerav <neerav.parikh@...el.com>; Samudrala, Sridhar
> > <sridhar.samudrala@...el.com>
> > Subject: RE: [PATCH RESEND net] ice: Correctly deal with PFs that do not
> > support RDMA
> >
> > > Subject: Re: [PATCH RESEND net] ice: Correctly deal with PFs that do not
> > > support RDMA
> > >
> > > On Thu, Sep 09, 2021 at 08:12:23AM -0700, Dave Ertman wrote:
> > > > There are two cases where the current PF does not support RDMA
> > > > functionality. The first is if the NVM loaded on the device is set to
> > > > not support RDMA (common_caps.rdma is false). The second is if the
> > > > kernel bonding driver has included the current PF in an active link
> > > > aggregate.
> > > >
> > > > When the driver has determined that this PF does not support RDMA,
> > > > then auxiliary devices should not be created on the auxiliary bus.
> > >
> > > This part is wrong, auxiliary devices should always be created, in your case it
> > will
> > > be one eth device only without extra irdma device.
> >
> > It is worth considering having an eth aux device/driver but is it a hard-and-
> > fast rule?
> > In this case, the RDMA-capable PCI network device spawns an auxiliary
> > device for RDMA
> > and the core driver is a network driver.
> >
> > >
> > > Your "bug" is that you mixed auxiliary bus devices with "regular" ones and
> > created
> > > eth device not as auxiliary one. This is why you are calling to
> > auxiliary_device_init()
> > > for RDMA only and fallback to non-auxiliary mode.
> >
> > It's a design choice on how you carve out function(s) off your PCI core device
> > to be
> > managed by auxiliary driver(s) and not a bug.
> >
> > Shiraz
>
> Also, regardless of whether netdev functionality is carved out into an auxiliary device or not, this code would still be necessary.
Right
>
> We don't want to carve out an auxiliary device to support a functionality that the base PCI device does not support. Not having
> the RDMA auxiliary device for an auxiliary driver to bind to is how we differentiate between devices that support RDMA and those
> that don't.
This is right too.
My complain is that you mixed enumerator logic with eth driver and
create auxiliary bus only if your RDMA device exists. It is wrong.
Thanks
>
> Thanks,
> DaveE
>
Powered by blists - more mailing lists