Message-ID:
<PH0PR12MB5481A4D48BCECC6712CEA3ACDC0B2@PH0PR12MB5481.namprd12.prod.outlook.com>
Date: Sat, 13 Apr 2024 02:01:25 +0000
From: Parav Pandit <parav@...dia.com>
To: "Samudrala, Sridhar" <sridhar.samudrala@...el.com>, David Ahern
<dsahern@...nel.org>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"stephen@...workplumber.org" <stephen@...workplumber.org>
CC: Jiri Pirko <jiri@...dia.com>, Shay Drori <shayd@...dia.com>, Michal
Swiatkowski <michal.swiatkowski@...ux.intel.com>
Subject: RE: [PATCH v2 0/2] devlink: Support setting max_io_eqs
> From: Samudrala, Sridhar <sridhar.samudrala@...el.com>
> Sent: Saturday, April 13, 2024 3:33 AM
>
> On 4/12/2024 12:22 AM, Parav Pandit wrote:
> >
> >> From: Parav Pandit <parav@...dia.com>
> >> Sent: Friday, April 12, 2024 9:02 AM
> >>
> >> Hi David, Sridhar,
> >>
> >>> From: David Ahern <dsahern@...nel.org>
> >>> Sent: Friday, April 12, 2024 7:36 AM
> >>>
> >>> On 4/11/24 5:03 PM, Samudrala, Sridhar wrote:
> >>>>
> >>>>
> >>>> On 4/10/2024 9:32 PM, Parav Pandit wrote:
> >>>>> Hi Sridhar,
> >>>>>
> >>>>>> From: Samudrala, Sridhar <sridhar.samudrala@...el.com>
> >>>>>> Sent: Thursday, April 11, 2024 4:53 AM
> >>>>>>
> >>>>>>
> >>>>>> On 4/10/2024 6:58 AM, Parav Pandit wrote:
> >>>>>>> Devices send event notifications for the IO queues, such as tx
> >>>>>>> and rx queues, through event queues.
> >>>>>>>
> >>>>>>> Enable a privileged owner, such as a hypervisor PF, to set the
> >>>>>>> number of IO event queues for the VF and SF during the
> >>>>>>> provisioning stage.
> >>>>>>
> >>>>>> How do you provision tx/rx queues for VFs & SFs?
> >>>>>> Don't you need similar mechanism to setup max tx/rx queues too?
> >>>>>
> >>>>> Currently we don't. They are derived from the IO event queues.
> >>>>> As you know, sometimes more txqs than IO event queues are needed
> >>>>> for XDP, timestamping, or multiple TCs.
> >>>>> If needed, additional knobs for txq and rxq can probably be added
> >>>>> to restrict device resources.
> >>>>
> >>>> Rather than deriving tx and rx queues from IO event queues, isn't
> >>>> it more user friendly to do it the other way around? Let the host
> >>>> admin set the max number of tx and rx queues allowed, and let the
> >>>> driver derive the number of IO event queues based on those values.
> >>>> This will be consistent with what ethtool reports as pre-set
> >>>> maximum values for the corresponding VF/SF.
> >>>>
> >>>
> >>> I agree with this point: IO EQ seems to be a mlx5 thing (or maybe I
> >>> have not reviewed enough of the other drivers).
> >>
> >> IO EQs are used by hns3, mana, mlx5, mlxsw, be2net. They might not
> >> yet have the need to provision them.
> >>
> >>> Rx and Tx queues are already part of the ethtool API. This devlink
> >>> feature is allowing resource limits to be configured, and a
> >>> consistent API across tools would be better for users.
> >>
> >> The IO EQs of a function are also utilized by the non-netdev stack on
> >> a multi-functionality function, e.g. for rdma completion vectors.
> >> Txq and rxq are yet another separate resource, so they are not
> >> mutually exclusive with IO EQs.
> >>
> >> I can additionally add txq and rxq provisioning knobs too if this is
> >> useful, yes?
>
> Yes. We need knobs for txq and rxq too.
> IO EQ looks like a completion queue. We don't need them for the ice
> driver at this time, but for our idpf-based control/switchdev driver we
> need a way to set up the max number of tx queues, rx queues, rx buffer
> queues, and tx completion queues.
>
Understood. Makes sense.
> >>
> >> Sridhar,
> >> I haven't checked lately how usable it is for other drivers; will you
> >> also implement the txq, rxq callbacks?
> >> Please let me know; I can start the work on those additional knobs
> >> later next week.
>
> Sure. Our subfunction support for ice is currently under review and we
> are defaulting to 1 rx/tx queue for now. These knobs would be required
> and useful when we enable more than 1 queue for each SF.
>
Got it.
I will start the kernel-side patches and CC you for reviews after completing this iproute2 patch.
It would be good if you could help verify them on your device.
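For reference, and assuming the final iproute2 syntax follows the existing `devlink port function` attribute style, the expected usage might look like the sketch below (the port index and PCI address are examples, and the txq/rxq knobs are the hypothetical additions discussed in this thread, not implemented anywhere yet):

```shell
# Show the current function attributes of a VF's devlink port
# (port index 1 on PF pci/0000:06:00.0 is an arbitrary example)
devlink port show pci/0000:06:00.0/1

# Provision the max number of IO event queues for that VF/SF
# before it is deployed, from the hypervisor PF side
devlink port function set pci/0000:06:00.0/1 max_io_eqs 10

# Hypothetical future knobs proposed in this thread (names are
# illustrative only; no such attributes exist at this point):
# devlink port function set pci/0000:06:00.0/1 max_txqs 16 max_rxqs 16
```

As with other port function attributes, this would only be settable by the eswitch owner while the function is not yet active.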
> >
> > I also forgot to mention in the above reply that some drivers like
> > mlx5 create internal tx and rx queues that are not directly visible in
> > channels (for XDP, timestamping, traffic classes, dropping certain
> > packets on rx, etc.).
> > So an exact derivation of IO queues is also hard there.
> >
> > Regardless, to me both knobs are useful, and the driver will create
> > min() resources based on both of the device limits.
> >