Message-ID: <4701065a.4aa9.19b346e492d.Coremail.15927021679@163.com>
Date: Fri, 19 Dec 2025 10:26:41 +0800 (CST)
From: Xiong Weimin <15927021679@....com>
To: "Leon Romanovsky" <leon@...nel.org>
Cc: "Alexei Starovoitov" <ast@...nel.org>,
	"Daniel Borkmann" <daniel@...earbox.net>,
	"David S . Miller" <davem@...emloft.net>,
	"Jakub Kicinski" <kuba@...nel.org>,
	"Jesper Dangaard Brouer" <hawk@...nel.org>,
	"John Fastabend" <john.fastabend@...il.com>,
	"Stanislav Fomichev" <sdf@...ichev.me>, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org
Subject: Re: Re: Implement initial driver for virtio-RDMA devices (kernel),
 virtio-rdma device model (QEMU) and vhost-user-RDMA backend device (DPDK)

At 2025-12-19 00:20:28, "Leon Romanovsky" <leon@...nel.org> wrote:
>On Wed, Dec 17, 2025 at 04:49:47PM +0800, Xiong Weimin wrote:
>> Hi all,
>> 
>> These testing instructions describe how to emulate a soft RoCE
>> device with a normal NIC (no RDMA).
>
>What is it? We already have one soft RoCE device implemented in the
>kernel (drivers/infiniband/sw/rxe), which doesn't require any QEMU
>changes at all.
>
>Thanks
>
The framework of vhost_user_rdma (DPDK) and the virtio-rdma driver (kernel) is
actually a userspace RDMA backend optimized for virtualization, while rxe
(Soft-RoCE) is a kernel-based software RDMA implementation. Key advantages
include:
1. Zero-Copy Architecture: vhost_user_rdma uses shared memory between VMs and
host processes, eliminating data copies; rxe requires kernel-mediated data
copies, adding latency (see the zero-copy sketch after this list).


2. Polling Mode: Avoids VM-exit interrupts by using busy-wait polling, reducing
CPU context switches (see the poll-loop sketch after this list).


3. QEMU/KVM Native Support: vhost_user_rdma integrates directly with
hypervisors via the vhost-user protocol; giving a guest comparable performance
with rxe requires PCI device passthrough (e.g., VFIO), complicating deployment
(see the QEMU example after this list).


4. Feature Support: vhost_user_rdma enables live migration, multi-queue virtio,
and NUMA-aware I/O processing.


5. Userspace Processing: Operates entirely in userspace (e.g., with SPDK),
bypassing the kernel network stack; rxe relies on the Linux kernel network
stack, consuming more CPU resources.


6. Resource Efficiency: Achieves lower VM-to-VM communication latency than rxe
in benchmarks.


7. vhost-user Backend: DPDK provides a vhost-user library that implements the
vhost-user protocol in userspace. This library enables efficient communication
between the hypervisor (QEMU) and a userspace networking stack (such as a
DPDK-based application). For RDMA, this means the vhost-user backend can handle
RDMA operations directly, without going through the kernel (see the rte_vhost
sketch after this list).
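
To make point 1 concrete, here is a minimal sketch of the zero-copy path,
assuming the backend uses DPDK's standard rte_vhost memory helpers (an
assumption on my part; the posted series may do this differently). The
guest-physical address of a buffer is translated to a host virtual address
backed by the same shared pages, so the backend touches guest data in place:

#include <stdlib.h>
#include <stdint.h>
#include <rte_vhost.h>

/* Translate a guest-physical buffer address to a host virtual address.
 * Because guest memory is a shared mapping, the returned pointer aliases
 * the guest's own pages: the backend can hand it to the RDMA engine
 * without any intermediate copy. "vid" is the vhost device id passed to
 * the backend's callbacks. */
void *map_guest_buffer(int vid, uint64_t gpa, uint64_t len)
{
        struct rte_vhost_memory *mem;
        uint64_t hva, span = len;

        if (rte_vhost_get_mem_table(vid, &mem) != 0)
                return NULL;

        hva = rte_vhost_va_from_guest_pa(mem, gpa, &span);
        free(mem);      /* rte_vhost_get_mem_table() allocates a snapshot */

        return (hva != 0 && span >= len) ? (void *)(uintptr_t)hva : NULL;
}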
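
For point 2, a busy-wait poll loop might look like the sketch below. Instead
of letting the guest kick the backend through an eventfd (which costs a
VM-exit), the backend spins on the shared avail index. process_desc() and the
running flag are hypothetical placeholders, not names from the series:

#include <stdint.h>
#include <stdbool.h>
#include <rte_pause.h>

extern volatile bool running;                           /* hypothetical */
extern void process_desc(uint16_t from, uint16_t to);   /* hypothetical */

struct vring_avail_hdr {
        uint16_t flags;
        uint16_t idx;   /* advanced by the guest driver */
};

void poll_queue(volatile struct vring_avail_hdr *avail)
{
        uint16_t last = avail->idx;

        while (running) {
                uint16_t idx = avail->idx;      /* no interrupt, no VM-exit */

                if (idx != last) {
                        process_desc(last, idx);
                        last = idx;
                }
                rte_pause();    /* spin-loop hint to the CPU */
        }
}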
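
Point 3 amounts to the following guest-side wiring. This is generic vhost-user
plumbing in QEMU; the shared memory backend (share=on) is what enables the
zero-copy path sketched above. Note the "vhost-user-rdma-pci" device name is
my assumption based on this series, not an upstream QEMU device, and the paths
are illustrative:

qemu-system-x86_64 \
        -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=mem0 \
        -chardev socket,id=rdma0,path=/tmp/vhost-user-rdma.sock \
        -device vhost-user-rdma-pci,chardev=rdma0 \
        ...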
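
Finally, for point 7, registering a backend with DPDK's rte_vhost library is
roughly the following, under the assumption that the posted backend uses the
standard rte_vhost driver API; the socket path and callback bodies are
illustrative:

#include <rte_vhost.h>

/* Called once the guest driver has negotiated features and the
 * virtqueues are ready to be processed. */
static int new_device(int vid)
{
        /* start poll threads for this device's queues */
        return 0;
}

static void destroy_device(int vid)
{
        /* stop poll threads, release per-device state */
}

static const struct rte_vhost_device_ops ops = {
        .new_device     = new_device,
        .destroy_device = destroy_device,
};

int start_backend(void)
{
        const char *path = "/tmp/vhost-user-rdma.sock"; /* illustrative */

        if (rte_vhost_driver_register(path, 0) != 0)
                return -1;
        if (rte_vhost_driver_callback_register(path, &ops) != 0)
                return -1;

        /* Hands the socket to a session thread; QEMU connects to it via
         * the -chardev shown in the command line above. */
        return rte_vhost_driver_start(path);
}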


Thanks 






