lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <CAJaqyWciYsRk2kKc=A51yD49HRH_r3ruscDDVhwwUdAVZU71rA@mail.gmail.com>
Date: Tue, 22 Apr 2025 11:11:59 +0200
From: Eugenio Perez Martin <eperezma@...hat.com>
To: Dragos Tatulea <dtatulea@...dia.com>
Cc: linux-kernel <linux-kernel@...r.kernel.org>, Maxime Coquelin <mcoqueli@...hat.com>, 
	Jason Wang <jasowang@...hat.com>, Michael Tsirkin <mst@...hat.com>, 
	Xie Yongji <xieyongji@...edance.com>
Subject: Re: Merging CVQ handling for vdpa drivers

On Thu, Apr 17, 2025 at 2:36 PM Dragos Tatulea <dtatulea@...dia.com> wrote:
>
> Hi Eugenio,
>
> On Wed, Apr 16, 2025 at 12:58:01PM +0200, Eugenio Perez Martin wrote:
> > Hi!
> >
> > At this moment mlx driver and vdpa_net_sim share some code that
> > handles the CVQ and are not very backend specific. In particular, they
> > share the vringh usage and the ASID code.
> >
> > Now VDUSE could benefit from implementing part of the CVQ in the
> > kernel too. The most obvious example is to avoid the userspace device
> > being able to block the virtio-net driver by not responding to CVQ
> > commands, but all DRY principles apply here too.
> >
> > I propose to abstract it in two steps:
> >
> > 1) Introduce vringh-based CVQ core
> >
> > Let's call it "struct vringh_cvq". It manages CVQ, and sends to the
> > vdpa backend driver just the CVQ commands. No more buffers,
> > notifications, etc handling for the driver.
> >
> > The backend driver can interact with this in many ways, like a
> > function to poll commands.
> I would like to quickly explore this direction as well: how would
> polling work actually here? The entrypoint would still need to be the
> .kick_vq op, right?
>

Right, at least in this first iteration.

> > But I think the best way is for the driver
> > to specify a struct of callbacks per command. This way vringh has its
> > own thread able to run these callbacks, so the backend driver does not
> > need to handle this thread too. If the driver does not specify a
> > particular callback, vringh_cvq returns error to the driver.
> >
> So does this mean that the driver doesn't need to create its own work
> queue anymore? It would only need to implement the handlers? This sounds
> good.
>

Yes, all the CVQ workqueue can be moved to this vringh_cvq.

> How would the .kick_vq happen for the cvq? Would the backend call the
> into this new CVQ mechanism which would read the commands and dispatch
> them to the callbacks? Or is this also abstracted away from the backend?
>

I guess it would be hard to avoid the conditional if
(is_ctrl_vq_idx(mvdev, idx)) in mlx5_vdpa_kick_vq for a first
iteration. But my idea for a second stage is to move all of that to
drivers/vdpa/vdpa.c, a new drivers/vdpa/vdpa_net.c, or similar.

> > Just implementing this first step already has all the intended benefits.
> >
> Could you explain a bit more how ASID handling would happen?

Maybe ASID handling needs to be delayed until the second stage.

All the current net devices handle ASID the same way: By having an
array of iova trees, and checking how each vq group translates to each
ASID. As long as the backend driver reports the number of vqs with
.get_vq_num_max, the frontend knows what is the cvq, how to respond to
.get_vq_group, etc etc.

For all the core vdpa new code, the backend device only has one ASID.
But it would be easy to add devices with many ASID, just make the vdpa
frontend code add one ASID extra and forward the request to the device
if it does not belong to the last ASID / queue / vq group etc.

> It is easy
> to imagine how the CVQ commands would be handled in the callback based
> API.
>
> > 2) Driver-specific CVQ callbacks
> >
> > Move the vringh_cvq struct to the vdpa core (or to a new vdpa net
> > core?), and let the backend driver just register the callback ops.
> >
> > This has less benefits compared with the first step, and it has more
> > effort comparatively. But it helps to move shared logic out of the
> > backend driver making it simpler.
> >
> > Is this plan interesting to you? Does anybody have the time to work on
> > this? Comments are welcome :).
> >
> Interesting: yes. I could help with the callback implementation for mlx5.
>

That would be great, thanks! :).


Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ