Message-ID:
<MW4PR18MB524447305F8F49847D88A6E2A60A2@MW4PR18MB5244.namprd18.prod.outlook.com>
Date: Sun, 14 Apr 2024 12:32:18 +0000
From: Vamsi Krishna Attunuru <vattunuru@...vell.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
CC: Arnd Bergmann <arnd@...db.de>,
"linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>,
Jerin Jacob <jerinj@...vell.com>
Subject: RE: [EXTERNAL] Re: [PATCH v5 1/1] misc: mrvl-cn10k-dpi: add Octeon
CN10K DPI administrative driver
> -----Original Message-----
> From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
> Sent: Sunday, April 14, 2024 3:16 PM
> To: Vamsi Krishna Attunuru <vattunuru@...vell.com>
> Cc: Arnd Bergmann <arnd@...db.de>; linux-kernel@...r.kernel.org; Jerin
> Jacob <jerinj@...vell.com>
> Subject: Re: [EXTERNAL] Re: [PATCH v5 1/1] misc: mrvl-cn10k-dpi: add
> Octeon CN10K DPI administrative driver
>
> On Sun, Apr 14, 2024 at 09:33:37AM +0000, Vamsi Krishna Attunuru wrote:
> >
> >
> > > -----Original Message-----
> > > From: Arnd Bergmann <arnd@...db.de>
> > > Sent: Sunday, April 14, 2024 12:41 AM
> > > To: Vamsi Krishna Attunuru <vattunuru@...vell.com>; Greg
> > > Kroah-Hartman <gregkh@...uxfoundation.org>
> > > Cc: linux-kernel@...r.kernel.org; Jerin Jacob <jerinj@...vell.com>
> > > Subject: Re: [EXTERNAL] Re: [PATCH v5 1/1] misc: mrvl-cn10k-dpi: add
> > > Octeon CN10K DPI administrative driver
> > >
> > > On Sat, Apr 13, 2024, at 18:17, Vamsi Krishna Attunuru wrote:
> > > > From: Greg KH <gregkh@...uxfoundation.org>
> > > >> On Sat, Apr 13, 2024 at 10:58:37AM +0000, Vamsi Krishna Attunuru wrote:
> > > >> > From: Greg KH <gregkh@...uxfoundation.org>
> > > >> >
> > > >> > No, it's a normal PCIe SR-IOV capability implemented in all
> > > >> > SR-IOV capable PCIe devices.
> > > >> > Our PF device, i.e. this driver in kernel space, services mailbox
> > > >> > requests from userspace applications via the VF devices. For
> > > >> > instance, a DPI VF device in user space writes into the mailbox
> > > >> > registers and the DPI hardware triggers an interrupt to the DPI
> > > >> > PF device.
> > > >> > Upon the PF interrupt, this driver services the mailbox requests.
> > > >>
> > > >> Isn't that a "normal" PCI thing? How is this different from
> > > >> other devices that have VF?
> > > >
> > > > Looks like there is a lot of confusion about this device. Let me
> > > > explain. There are two aspects to this DPI PF device:
> > > > a) It's a PCIe device, so it is "using" some of the PCI services
> > > > provided by the PCIe HW or the PCI subsystem.
> > > > b) It is "providing" a non-PCIe service (the DPI HW administrative
> > > > function) by using (a). Let me enumerate the PF device operations in
> > > > terms of the above aspects.
> > > > 1) A means to create VF(s) from the PF. It's a category (a) service;
> > > > the driver uses the pci_sriov_configure_simple() API from the PCI
> > > > subsystem to implement it.
> > > > 2) A means to get interrupts (mailbox or any device-specific
> > > > interrupt). It's a category (a) service; the driver uses the
> > > > pci_alloc_irq_vectors() API from the PCI subsystem to implement it.
> > > > 3) A means to get the mailbox content from the VF by using (2). It's
> > > > a category (b) service. This service is not part of the PCI
> > > > specification. The DPI PF device has the mailbox registers
> > > > (DPI_MBOX_PF_VF_DATA registers) in its PCIe BAR space, which are
> > > > device specific.
> > > > 4) Upon receiving a DPI HW administrative function mailbox request,
> > > > service it. It's a category (b) service. This service is not part of
> > > > the PCI specification.
> > > > For instance, dpi_queue_open & close are requests sent from the DPI
> > > > VF device to the DPI PF device for setting up the DPI VF queue
> > > > resources. Once they are set up by the DPI PF, the DPI VF device can
> > > > use these queues. These queues are not part of the PCIe
> > > > specification; they are used by the DPI VF device/driver to perform
> > > > DMA.
> > >
> > > It's not directly my area either, but as far as I can tell from
> > > reading the competing sr-iov based device drivers, these seem to
> > > handle all of the above in the network driver that owns the PF
> > > rather than a separate driver, e.g. for the first point:
> > >
> > > $ git grep -w sriov_configure.= drivers/net/
> > > drivers/net/ethernet/amazon/ena/ena_netdev.c: .sriov_configure = pci_sriov_configure_simple,
> > > drivers/net/ethernet/amd/pds_core/main.c: .sriov_configure = pdsc_sriov_configure,
> > > drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c: .sriov_configure = bnx2x_sriov_configure,
> > > drivers/net/ethernet/broadcom/bnxt/bnxt.c: .sriov_configure = bnxt_sriov_configure,
> > > drivers/net/ethernet/cavium/liquidio/lio_main.c: .sriov_configure = liquidio_enable_sriov,
> > > drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c: .sriov_configure = cxgb4_iov_configure,
> > > drivers/net/ethernet/emulex/benet/be_main.c: .sriov_configure = be_pci_sriov_configure,
> > > drivers/net/ethernet/freescale/enetc/enetc_pf.c: .sriov_configure = enetc_sriov_configure,
> > > drivers/net/ethernet/fungible/funeth/funeth_main.c: .sriov_configure = funeth_sriov_configure,
> > > drivers/net/ethernet/hisilicon/hns3/hns3_enet.c: .sriov_configure = hns3_pci_sriov_configure,
> > > drivers/net/ethernet/huawei/hinic/hinic_main.c: .sriov_configure = hinic_pci_sriov_configure,
> > > drivers/net/ethernet/intel/fm10k/fm10k_pci.c: .sriov_configure = fm10k_iov_configure,
> > > drivers/net/ethernet/intel/i40e/i40e_main.c: .sriov_configure = i40e_pci_sriov_configure,
> > > drivers/net/ethernet/intel/ice/ice_main.c: .sriov_configure = ice_sriov_configure,
> > > drivers/net/ethernet/intel/idpf/idpf_main.c: .sriov_configure = idpf_sriov_configure,
> > > drivers/net/ethernet/intel/igb/igb_main.c: .sriov_configure = igb_pci_sriov_configure,
> > > drivers/net/ethernet/intel/ixgbe/ixgbe_main.c: .sriov_configure = ixgbe_pci_sriov_configure,
> > > drivers/net/ethernet/marvell/octeon_ep/octep_main.c: .sriov_configure = octep_sriov_configure,
> > > drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c: .sriov_configure = otx2_sriov_configure
> > > drivers/net/ethernet/netronome/nfp/nfp_main.c: .sriov_configure = nfp_pcie_sriov_configure,
> > > drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c: .sriov_configure = ionic_sriov_configure,
> > > drivers/net/ethernet/qlogic/qede/qede_main.c: .sriov_configure = qede_sriov_configure,
> > > drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c: .sriov_configure = qlcnic_pci_sriov_configure,
> > > drivers/net/ethernet/sfc/ef10.c: .sriov_configure = efx_ef10_sriov_configure,
> > > drivers/net/ethernet/sfc/ef100.c: .sriov_configure = ef100_pci_sriov_configure,
> > > drivers/net/ethernet/sfc/ef100_nic.c: .sriov_configure = IS_ENABLED(CONFIG_SFC_SRIOV) ?
> > > drivers/net/ethernet/sfc/efx.c: .sriov_configure = efx_pci_sriov_configure,
> > > drivers/net/ethernet/sfc/siena/efx.c: .sriov_configure = efx_pci_sriov_configure,
> > > drivers/net/ethernet/sfc/siena/siena.c: .sriov_configure = efx_siena_sriov_configure,
> > >
> > > In what way is your hardware different from all the others?
> >
> > All of the above devices are network devices which implement struct
> > net_device_ops, i.e. those PCI devices are networking devices capable
> > of sending/receiving network packets.
> > This device has no networking functionality to implement struct
> > net_device_ops; it's a simple PCIe PF device that enables its VFs and
> > services any mailbox requests.
>
> What driver handles the "mailbox requests"? What are these requests for?
I think I have already mentioned this in my previous reply; copying the same here. Please see (4) in [1].
The DPI PF driver (this driver) handles mailbox requests from the DPI VF (a userspace driver implemented using the VFIO uAPI).
The DPI PF driver (this driver) does not fit any of the device classes (ethernet, crypto, dma, ...) in the Linux kernel, hence it is
made a misc device driver. A standard device class always has a standard device function: for example, an
ethernet device sends/receives ethernet frames, a crypto device performs crypto transformations,
and a dma device copies data from source to destination memory.
Now, why did the HW designers choose to have a DPI (DMA engine) PF device in the first place?
a) The DPI VF (not the DPI PF) device has the capability to do DMA (copying data from source to destination memory).
b) In order to do DMA, the DPI VF device needs a DPI queue.
c) Here is the catch: when the VF device starts, its queues are not configured. So the DPI VF device/driver
asks (via mailbox) the DPI PF (this driver) to set up a queue with the required configuration; dpi_queue_open() does exactly that.
d) You may ask why the VF device does NOT configure its queues on its own. That is a HW resource
provisioning optimization (introduced by the HW designers), where the DMA engines are provisioned across the VF device queues.
So the PF (administrative function) arbitrates the requests from different VF devices via the mailbox and allows them to
configure _global_ resources which do not belong to any single VF.
Hope this clarifies.
[1]
------------------------------
Looks like there is a lot of confusion about this device. Let me explain.
There are two aspects to this DPI PF device:
a) It's a PCIe device, so it is "using" some of the PCI services provided by the PCIe HW or the PCI subsystem.
b) It is "providing" a non-PCIe service (the DPI HW administrative function) by using (a).
Let me enumerate the PF device operations in terms of the above aspects.
1) A means to create VF(s) from the PF. It's a category (a) service; the driver uses the pci_sriov_configure_simple() API from the PCI subsystem to implement it.
2) A means to get interrupts (mailbox or any device-specific interrupt). It's a category (a) service; the driver uses the pci_alloc_irq_vectors() API from the PCI subsystem to implement it.
3) A means to get the mailbox content from the VF by using (2). It's a category (b) service. This service is not part of the PCI specification.
The DPI PF device has the mailbox registers (DPI_MBOX_PF_VF_DATA registers) in its PCIe BAR space, which are device specific.
4) Upon receiving a DPI HW administrative function mailbox request, service it. It's a category (b) service. This service is not part of the PCI specification.
For instance, dpi_queue_open & close are requests sent from the DPI VF device to the DPI PF device for setting up the DPI VF queue resources. Once they are set up by the DPI PF,
the DPI VF device can use these queues. These queues are not part of the PCIe specification; they are used by the DPI VF device/driver to perform DMA.
-------------------------------
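For reference, the category (a) pieces in (1) and (2) above map onto stock PCI core APIs roughly as sketched below. This is a minimal illustration, not the actual mrvl-cn10k-dpi code: the handler name, vector count, and driver fields are placeholders; error/teardown paths and the device ID table are omitted.

```c
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/pci.h>

static irqreturn_t dpi_mbox_intr_handler(int irq, void *data)
{
	/* category (b) work would happen here: read the device-specific
	 * DPI_MBOX_PF_VF_DATA registers from BAR space and service the
	 * VF request */
	return IRQ_HANDLED;
}

static int dpi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int nvec, err;

	/* category (a), item (2): one MSI-X vector for the PF/VF mailbox */
	nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSIX);
	if (nvec < 0)
		return nvec;

	err = request_irq(pci_irq_vector(pdev, 0), dpi_mbox_intr_handler,
			  0, "dpi-mbox", pdev);
	if (err)
		pci_free_irq_vectors(pdev);
	return err;
}

static struct pci_driver dpi_driver = {
	.name            = "mrvl-cn10k-dpi",
	.probe           = dpi_probe,
	/* category (a), item (1): VF creation delegated to the PCI core */
	.sriov_configure = pci_sriov_configure_simple,
};
```

Nothing device-class specific is needed for either item; only (3) and (4) are DPI-specific, which is why they live behind a misc device rather than a class driver.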
>
> thanks,
>
> greg k-h