Message-ID: <4f927158-3b84-c2e1-77d6-c616139e5766@intel.com>
Date:   Fri, 31 Mar 2023 03:01:18 +0530
From:   "Linga, Pavan Kumar" <pavan.kumar.linga@...el.com>
To:     Paul Menzel <pmenzel@...gen.mpg.de>
CC:     <willemb@...gle.com>, <netdev@...r.kernel.org>, <decot@...gle.com>,
        <shiraz.saleem@...el.com>, <intel-wired-lan@...ts.osuosl.org>
Subject: Re: [Intel-wired-lan] [PATCH net-next 00/15] Introduce IDPF driver



On 3/29/2023 9:11 PM, Paul Menzel wrote:
> Dear Pavan,
> 
> 
> Thank you very much for the new driver. It’s a lot of code. ;-)
> 
> On 29.03.23 at 16:03, Pavan Kumar Linga wrote:
>> This patch series introduces the Infrastructure Data Path Function
>> (IDPF) driver. It is used for both physical and virtual functions.
>> Except for some of the device operations, the rest of the
>> functionality is the same for both the PF and the VF. IDPF uses
>> virtchnl version 2 opcodes and structures defined in the virtchnl2
>> header file, which lets the driver learn the capabilities and
>> register offsets from the device Control Plane (CP) instead of
>> assuming default values.
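For illustration, the capability exchange can be pictured as the driver
asking the CP for its capabilities over the mailbox and caching the
answer instead of hard-coding defaults. The sketch below uses made-up
names, not the actual virtchnl2 definitions:

#include <linux/types.h>

/* Hypothetical capability structure returned by the CP; the real
 * layout is in the virtchnl2 header added by this series.
 */
struct example_caps {
	__le32 csum_caps;		/* checksum offload capability flags */
	__le32 rss_caps;		/* RSS capability flags */
	__le16 max_vports;		/* number of vports the CP allows */
	__le16 num_allocated_vectors;	/* interrupt vectors granted by the CP */
	__le64 mailbox_dyn_ctl;		/* mailbox register offset from the CP */
};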
>>
>> The format of the series follows the driver init flow through to
>> interface open. To start with, probe gets called and kicks off the
>> driver initialization by spawning the 'vc_event_task' work queue,
>> which in turn calls the 'hard reset' function. As part of that, the
>> mailbox used to send/receive the virtchnl messages to/from the CP is
>> initialized. Once that is done, 'core init' kicks in, which requests
>> all the required global resources from the CP and spawns the
>> 'init_task' work queue to create the vports.
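As a rough sketch of that ordering, with hypothetical 'example_' names
standing in for the driver's actual symbols, probe only schedules work
and the heavy lifting happens in the work items:

#include <linux/pci.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;
static struct delayed_work example_vc_event_task;
static struct delayed_work example_init_task;

static void example_init_task_fn(struct work_struct *work)
{
	/* create the vports advertised by the CP (next paragraph) */
}

static void example_vc_event_task_fn(struct work_struct *work)
{
	/* 'hard reset': quiesce the device, then set up the mailbox
	 * used to exchange virtchnl messages with the CP
	 */

	/* 'core init': request the required global resources from the
	 * CP, then spawn the init task to create the vports
	 */
	queue_delayed_work(example_wq, &example_init_task, 0);
}

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
	example_wq = alloc_workqueue("example_wq", 0, 0);
	if (!example_wq)
		return -ENOMEM;

	INIT_DELAYED_WORK(&example_vc_event_task, example_vc_event_task_fn);
	INIT_DELAYED_WORK(&example_init_task, example_init_task_fn);

	/* kick off driver initialization from the work queue */
	queue_delayed_work(example_wq, &example_vc_event_task, 0);
	return 0;
}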
>>
>> Based on the capability information received, the driver creates the
>> corresponding number of vports (one or many), where each vport is
>> associated with a netdev. Each vport also has its own resources such
>> as queues, vectors etc. From there, the rest of the netdev_ops and
>> the data path are added.
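One way to picture the per-vport state (illustrative names only; the
real structures are in the patches):

#include <linux/netdevice.h>

/* Hypothetical per-vport state: each vport owns a netdev plus its own
 * queues and interrupt vectors.
 */
struct example_vport {
	struct net_device *netdev;	/* one netdev per vport */
	u16 num_txq;			/* TX queues owned by this vport */
	u16 num_rxq;			/* RX queues owned by this vport */
	u16 num_q_vectors;		/* MSI-X vectors for this vport */
};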
>>
>> IDPF implements both the single queue model, which is the
>> traditional queueing model, and the split queue model. In the split
>> queue model, separate queues are used for the completion descriptors
>> and the buffers, which helps to implement out-of-order completions.
>> It also helps to implement asymmetric queues; for example, multiple
>> RX completion queues can be processed by a single RX buffer queue,
>> and multiple TX buffer queues can be processed by a single TX
>> completion queue. In the single queue model, the same queue is used
>> for both descriptor completions and buffer completions. The driver
>> also supports features such as generic checksum offload, generic
>> receive offload (hardware GRO) etc.
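The difference between the two models can be sketched like this, again
with hypothetical names rather than the driver's actual structures:

struct example_queue;	/* descriptor ring, details omitted */

/* Split model: buffers are posted on one queue and the device reports
 * finished work on a separate completion queue, so completions can
 * arrive out of order and several queues can be paired asymmetrically
 * with a single peer queue.
 */
struct example_splitq_rx {
	struct example_queue *bufq;	/* driver posts RX buffers here */
	struct example_queue *complq;	/* device writes completions here */
};

/* Single model: the same ring carries both the buffers and their
 * completions, in order.
 */
struct example_singleq_rx {
	struct example_queue *q;
};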
> 
> […]
> 
> Can you please elaborate on how the driver can be tested, and if tests 
> are added to automatically test the driver?
> 
> 
I'm not really sure what tests you are referring to. Can you please
elaborate on that part? We are looking into ways to provide remote
access to the HW, but don't have anything currently available. Will
provide more details once that is sorted.


> Kind regards,
> 
> Paul

Thanks,
Pavan
