Message-ID: <72691489-274c-8c3c-c897-08f74f413097@molgen.mpg.de>
Date: Wed, 29 Mar 2023 17:41:12 +0200
From: Paul Menzel <pmenzel@...gen.mpg.de>
To: Pavan Kumar Linga <pavan.kumar.linga@...el.com>
Cc: willemb@...gle.com, netdev@...r.kernel.org, decot@...gle.com,
shiraz.saleem@...el.com, intel-wired-lan@...ts.osuosl.org
Subject: Re: [Intel-wired-lan] [PATCH net-next 00/15] Introduce IDPF driver
Dear Pavan,
Thank you very much for the new driver. It’s a lot of code. ;-)
On 29.03.23 at 16:03, Pavan Kumar Linga wrote:
> This patch series introduces the Infrastructure Data Path Function (IDPF)
> driver. It is used for both physical and virtual functions. Except for
> some of the device operations, the rest of the functionality is the same
> for the PF and VF. IDPF uses virtchnl version 2 opcodes and structures
> defined in the virtchnl2 header file, which helps the driver learn the
> capabilities and register offsets from the device Control Plane (CP)
> instead of assuming default values.
>
> The format of the series follows the driver init flow up to interface open.
> To start with, probe gets called and kicks off the driver initialization
> by spawning the 'vc_event_task' work queue, which in turn calls the
> 'hard reset' function. As part of that, the mailbox is initialized, which
> is used to send/receive the virtchnl messages to/from the CP. Once that is
> done, 'core init' kicks in, which requests all the required global resources
> from the CP and spawns the 'init_task' work queue to create the vports.
>
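If I am reading the init flow correctly, it is roughly the deferred-init
work queue pattern sketched below. The struct and function names are just
my shorthand, not necessarily the ones used in the patches, and error
unwinding is left out:

#include <linux/pci.h>
#include <linux/workqueue.h>

/* Illustrative only: names and layout are assumptions, not the IDPF code. */
struct idpf_adapter {
        struct pci_dev *pdev;
        struct workqueue_struct *vc_event_wq;
        struct work_struct vc_event_task;
};

/* Stands in for the "hard reset": bring up the mailbox used to exchange
 * virtchnl messages with the CP. */
static void idpf_hard_reset(struct idpf_adapter *adapter)
{
}

/* Stands in for "core init": request global resources/capabilities from
 * the CP and spawn the init_task that creates the vports and netdevs. */
static void idpf_core_init(struct idpf_adapter *adapter)
{
}

static void idpf_vc_event_task(struct work_struct *work)
{
        struct idpf_adapter *adapter =
                container_of(work, struct idpf_adapter, vc_event_task);

        idpf_hard_reset(adapter);
        idpf_core_init(adapter);
}

/* Stands in for the real PCI probe: defer the heavy lifting to a work item. */
static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
        struct idpf_adapter *adapter;

        adapter = devm_kzalloc(&pdev->dev, sizeof(*adapter), GFP_KERNEL);
        if (!adapter)
                return -ENOMEM;

        adapter->pdev = pdev;
        adapter->vc_event_wq = alloc_workqueue("idpf_vc_event", 0, 0);
        if (!adapter->vc_event_wq)
                return -ENOMEM;

        INIT_WORK(&adapter->vc_event_task, idpf_vc_event_task);
        queue_work(adapter->vc_event_wq, &adapter->vc_event_task);

        return 0;
}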
> Based on the capability information received, the driver creates the said
> number of vports (one or many), where each vport is associated with a netdev.
> Also, each vport has its own resources such as queues, vectors, etc.
> From there, the rest of the netdev_ops and data path are added.
>
> IDPF implements both the single queue model, which is the traditional
> queueing model, and the split queue model. In the split queue model, it
> uses separate queues for completion descriptors and buffers, which helps
> to implement out-of-order completions. It also helps to implement
> asymmetric queues; for example, multiple RX completion queues can be
> processed by a single RX buffer queue, and multiple TX buffer queues can
> be processed by a single TX completion queue. In the single queue model,
> the same queue is used for both descriptor completions and buffer
> completions. The driver also supports features such as generic checksum
> offload, generic receive offload (hardware GRO), etc.
[…]
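Also, just to check that I read the split queue description correctly: the
asymmetric mapping would look roughly like the sketch below (again, the
names are made up by me and only show the many-to-one relationships):

/* Several RX completion queues can share one RX buffer queue that posts
 * receive buffers to the hardware. */
struct sketch_rx_bufq {
        void *bufs;                             /* posted receive buffers */
};

struct sketch_rx_complq {
        struct sketch_rx_bufq *bufq;            /* many complqs -> one bufq */
};

/* Several TX (buffer) queues can share one TX completion queue on which the
 * hardware reports completions, possibly out of order. */
struct sketch_tx_complq {
        void *compl_ring;                       /* completion descriptors */
};

struct sketch_txq {
        struct sketch_tx_complq *complq;        /* many txqs -> one complq */
};

In the single queue model, by contrast, one ring carries both the buffers
and their completion descriptors.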
Can you please elaborate on how the driver can be tested, and whether
automated tests for the driver are added?
Kind regards,
Paul