Message-ID: <shmvjznci4e7m3oeqzqrhmvsljzatafwpnauyyrlpv4xcntljo@w4ip2b3m34w4>
Date: Thu, 17 Apr 2025 12:35:49 +0000
From: Dragos Tatulea <dtatulea@...dia.com>
To: Eugenio Perez Martin <eperezma@...hat.com>, 
	linux-kernel <linux-kernel@...r.kernel.org>, Maxime Coquelin <mcoqueli@...hat.com>
Cc: Jason Wang <jasowang@...hat.com>, Michael Tsirkin <mst@...hat.com>, 
	Xie Yongji <xieyongji@...edance.com>
Subject: Re: Merging CVQ handling for vdpa drivers

Hi Eugenio,

On Wed, Apr 16, 2025 at 12:58:01PM +0200, Eugenio Perez Martin wrote:
> Hi!
> 
> At the moment the mlx5 driver and vdpa_net_sim share some code that
> handles the CVQ and is not very backend specific. In particular, they
> share the vringh usage and the ASID code.
> 
> Now VDUSE could benefit from implementing part of the CVQ in the
> kernel too. The most obvious example is preventing the userspace
> device from blocking the virtio-net driver by not responding to CVQ
> commands, but the usual DRY arguments apply here too.
> 
> I propose to abstract it in two steps:
> 
> 1) Introduce vringh-based CVQ core
> 
> Let's call it "struct vringh_cvq". It manages the CVQ and hands only
> the CVQ commands to the vdpa backend driver. No more buffer,
> notification, etc. handling for the driver.
>
> The backend driver can interact with this in many ways, for example
> via a function to poll for commands.
I would like to quickly explore this direction as well: how would
polling actually work here? The entry point would still need to be the
.kick_vq op, right?
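
For my own understanding, the flow I am picturing (a rough sketch, with
all names made up) is something like:

/*
 * Rough sketch, hypothetical names: the backend's .kick_vq for the CVQ
 * only asks the shared vringh_cvq core to drain pending commands; the
 * core does the vringh descriptor handling and hands back whole
 * commands.
 */
static void example_kick_cvq(struct example_vdpa_dev *dev)
{
        struct vringh_cvq_cmd cmd;

        while (vringh_cvq_next_cmd(&dev->cvq, &cmd) > 0) {
                int err = example_handle_cmd(dev, &cmd);

                /* hypothetical: the core writes the status byte and
                 * pushes the used buffer back to the guest
                 */
                vringh_cvq_complete(&dev->cvq, &cmd, err);
        }
}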

> But I think the best way is for the driver
> to specify a struct of callbacks, one per command. This way vringh has
> its own thread to run these callbacks, so the backend driver does not
> need to manage that thread itself. If the driver does not specify a
> particular callback, vringh_cvq returns an error to the driver.
> 
So does this mean that the driver doesn't need to create its own work
queue anymore? It would only need to implement the handlers? This sounds
good.

How would the .kick_vq happen for the CVQ? Would the backend call into
this new CVQ mechanism, which would read the commands and dispatch them
to the callbacks? Or is this also abstracted away from the backend?
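
To check that I understand the per-command callback idea, this is
roughly what I am picturing (pure sketch, all names invented):

/*
 * Sketch only, hypothetical names: one callback per CVQ command class,
 * invoked from vringh_cvq's own worker, so the backend only implements
 * the handlers and never touches buffers or notifications.
 */
struct vringh_cvq_ops {
        int (*set_mac)(void *priv, const u8 *mac);
        int (*set_mq)(void *priv, u16 num_queue_pairs);
        int (*set_rx_mode)(void *priv, u8 cmd, const void *data,
                           size_t len);
};

struct vringh_cvq {
        struct vringh vrh;              /* existing vringh instance */
        struct work_struct work;        /* worker owned by the core */
        const struct vringh_cvq_ops *ops;
        void *priv;                     /* backend private data */
};

Is that more or less the shape you have in mind?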

> Just implementing this first step already has all the intended benefits.
> 
Could you explain a bit more how ASID handling would happen? It is easy
to imagine how the CVQ commands would be handled in the callback-based
API, but the ASID part is less obvious to me.

> 2) Driver-specific CVQ callbacks
> 
> Move the vringh_cvq struct to the vdpa core (or to a new vdpa net
> core?), and let the backend driver just register the callback ops.
> 
> This has fewer benefits than the first step and requires comparatively
> more effort. But it helps move shared logic out of the backend driver,
> making it simpler.
> 
> Is this plan interesting to you? Does anybody have the time to work on
> this? Comments are welcome :).
> 
Interesting: yes. I could help with the callback implementation for mlx5.
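
As a strawman for the mlx5 side (again, all names made up), I would
expect the registration to reduce to something like:

/*
 * Strawman, hypothetical helper names: mlx5 drops its own CVQ
 * workqueue and only provides the per-command handlers to the
 * vdpa (net) core.
 */
static const struct vringh_cvq_ops mlx5_vdpa_cvq_ops = {
        .set_mac     = mlx5_vdpa_cvq_set_mac,
        .set_mq      = mlx5_vdpa_cvq_set_mq,
        .set_rx_mode = mlx5_vdpa_cvq_set_rx_mode,
};

static int mlx5_vdpa_register_cvq(struct mlx5_vdpa_net *ndev)
{
        /* hypothetical registration helper in the vdpa core */
        return vdpa_net_register_cvq_ops(&ndev->mvdev.vdev,
                                         &mlx5_vdpa_cvq_ops, ndev);
}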

Thanks,
Dragos
