Message-ID: <CAJaqyWfcY0Hi=B9rPAqAfkJoXBgf0jYm_dUXrRX=sZ4XRCxjOw@mail.gmail.com>
Date: Wed, 16 Apr 2025 12:58:01 +0200
From: Eugenio Perez Martin <eperezma@...hat.com>
To: linux-kernel <linux-kernel@...r.kernel.org>, Maxime Coquelin <mcoqueli@...hat.com>, 
	Dragos Tatulea DE <dtatulea@...dia.com>
Cc: Jason Wang <jasowang@...hat.com>, Michael Tsirkin <mst@...hat.com>, 
	Xie Yongji <xieyongji@...edance.com>
Subject: Merging CVQ handling for vdpa drivers

Hi!

At the moment the mlx driver and vdpa_net_sim share some code that
handles the CVQ and is not really backend specific. In particular,
they share the vringh usage and the ASID code.

Now VDUSE could benefit from implementing part of the CVQ in the
kernel too. The most obvious gain is that the userspace device can no
longer block the virtio-net driver by not responding to CVQ commands,
but the usual DRY arguments apply here too.

I propose to abstract it in two steps:

1) Introduce vringh-based CVQ core

Let's call it "struct vringh_cvq". It manages the CVQ and hands the
vdpa backend driver only the parsed CVQ commands; the driver no longer
needs to handle buffers, notifications, etc.

The backend driver could interact with this in many ways, for example
through a function to poll for commands. But I think the best option
is for the driver to specify a struct of callbacks, one per command.
This way vringh has its own thread to run these callbacks, so the
backend driver does not need to manage that thread either. If the
driver does not provide a callback for a particular command,
vringh_cvq returns an error.
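To make it concrete, something along these lines (all names here, like
vringh_cvq_ops and the handle_* callbacks, are invented just to
illustrate the idea; the real interface would need discussion):

#include <linux/types.h>
#include <linux/virtio_net.h>

struct vringh_cvq;

/* Hypothetical per-command callbacks; none of this exists today. */
struct vringh_cvq_ops {
	/*
	 * Called from the vringh_cvq worker thread with the command
	 * payload already copied out of the guest buffers. Return
	 * VIRTIO_NET_OK or VIRTIO_NET_ERR; the core writes the ack
	 * back to the CVQ.
	 */
	u8 (*handle_mac)(struct vringh_cvq *cvq, const void *cmd, size_t len);
	u8 (*handle_mq)(struct vringh_cvq *cvq, const void *cmd, size_t len);
	/* ... one entry per supported command class ... */
};

/* A backend (mlx5, vdpa_net_sim, VDUSE) fills only what it supports. */
static u8 my_handle_mq(struct vringh_cvq *cvq, const void *cmd, size_t len)
{
	const struct virtio_net_ctrl_mq *mq = cmd;

	if (len < sizeof(*mq))
		return VIRTIO_NET_ERR;

	/* program the backend-specific number of queue pairs here */
	return VIRTIO_NET_OK;
}

static const struct vringh_cvq_ops my_cvq_ops = {
	.handle_mq = my_handle_mq,
	/* .handle_mac left NULL: vringh_cvq answers VIRTIO_NET_ERR */
};

That keeps the buffer and notification handling entirely in the core,
and the error path for unsupported commands comes for free.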

Just implementing this first step already provides all the intended benefits.

2) Driver-specific CVQ callbacks

Move the vringh_cvq struct to the vdpa core (or to a new vdpa net
core?), and let the backend driver just register the callback ops.
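In this second step a backend would essentially keep nothing
CVQ-related except the ops table, e.g. (again, vdpa_net_cvq_register()
and struct my_vdpa are made up, just to show the shape):

#include <linux/vdpa.h>

struct my_vdpa {
	struct vdpa_device vdpa;
	/* no CVQ state here anymore: the (net) core owns the vringh_cvq */
};

static int my_cvq_setup(struct my_vdpa *my)
{
	/* hypothetical helper exported by the vdpa net core */
	return vdpa_net_cvq_register(&my->vdpa, &my_cvq_ops);
}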

This brings fewer benefits than the first step and requires
comparatively more effort, but it helps move shared logic out of the
backend drivers, making them simpler.

Is this plan interesting to you? Does anybody have the time to work on
this? Comments are welcome :).

Thanks!

