Date:   Thu, 19 Jan 2023 22:23:28 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     io-uring@...r.kernel.org, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, nbd@...er.debian.org
Cc:     ming.lei@...hat.com
Subject: ublk-nbd: ublk-nbd is available

Hi,

ublk-nbd[1] is available now.

Basically it is an nbd client, but implemented entirely in userspace;
with the current nbd-client in [2], the transmission phase is handled
by the linux block nbd driver.

The handshake implementation is borrowed from the nbd project[2], so
ublk-nbd basically only adds new code for the transmission phase; it
can be thought of as moving the linux block nbd driver into userspace.

The new code is basically in nbd/tgt_nbd.cpp. IO handling is based on
liburing[3] and implemented with C++20 coroutines, so everything runs
in a single pthread and is completely lockless. It also turned out to
be pretty easy to design & implement, thanks to the ublk framework,
C++20 coroutines and liburing.
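
For anyone curious what that single-pthread, lockless model looks
like, below is a minimal standalone sketch of the pattern (not code
from ublk-nbd itself; the io_task/sqe_awaitable names are made up for
illustration): each IO runs as a C++20 coroutine that queues an
io_uring SQE, suspends, and is resumed from the completion loop on
the same thread.

#include <liburing.h>
#include <coroutine>
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

static struct io_uring ring;

/* Awaitable wrapping one SQE: suspends the coroutine and stashes its
 * handle in the SQE's user_data so the completion loop can resume it. */
struct sqe_awaitable {
    struct io_uring_sqe *sqe;
    int res = 0;
    std::coroutine_handle<> handle;

    bool await_ready() const noexcept { return false; }
    void await_suspend(std::coroutine_handle<> h) noexcept {
        handle = h;
        io_uring_sqe_set_data(sqe, this);
    }
    int await_resume() const noexcept { return res; }
};

/* Simplest fire-and-forget coroutine type. */
struct io_task {
    struct promise_type {
        io_task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

/* One coroutine per IO: queue an async read, suspend, print the result. */
static io_task read_one(int fd, char *buf, unsigned len, bool *done)
{
    sqe_awaitable aw{ io_uring_get_sqe(&ring) };
    io_uring_prep_read(aw.sqe, fd, buf, len, 0);
    int res = co_await aw;              /* suspended until the CQE arrives */
    std::printf("read returned %d\n", res);
    *done = true;
}

int main(void)
{
    io_uring_queue_init(8, &ring, 0);
    int fd = open("/etc/hostname", O_RDONLY);
    char buf[256];
    bool done = false;

    read_one(fd, buf, sizeof(buf), &done);  /* runs until first co_await */
    io_uring_submit(&ring);

    /* Single-thread completion loop: submissions and completions all
     * happen here, so no locking is needed anywhere. */
    while (!done) {
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        auto *aw = static_cast<sqe_awaitable *>(io_uring_cqe_get_data(cqe));
        aw->res = cqe->res;
        io_uring_cqe_seen(&ring, cqe);
        aw->handle.resume();                /* resume the waiting coroutine */
    }
    close(fd);
    io_uring_queue_exit(&ring);
    return 0;
}

Build with something like "g++ -std=c++20 ... -luring". ublk-nbd
applies roughly the same idea to the NBD request/reply traffic
instead of a plain file read.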

ublk-nbd supports both tcp and unix sockets, and io_uring send zero
copy can be enabled via the command line option '--send_zc'; see the
README[4] for details.
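
For reference, here is a rough sketch (again, not ublk-nbd code) of
what '--send_zc' corresponds to at the liburing level, assuming
liburing >= 2.3 and a kernel with IORING_OP_SEND_ZC support; 'sock'
is assumed to be an already-connected tcp or unix socket. A zero-copy
send completes in two steps: a CQE with the send result, then a later
notification CQE telling you the buffer may be reused.

#include <liburing.h>
#include <stddef.h>

static int send_zc_once(struct io_uring *ring, int sock,
                        const void *buf, size_t len)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

    /* Same role as a plain send, but the kernel references 'buf'
     * instead of copying it, so 'buf' must stay untouched until the
     * notification CQE arrives. */
    io_uring_prep_send_zc(sqe, sock, buf, len, 0, 0);
    io_uring_submit(ring);

    int sent = -1;
    for (;;) {
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(ring, &cqe);
        unsigned flags = cqe->flags;
        int res = cqe->res;
        io_uring_cqe_seen(ring, cqe);

        if (flags & IORING_CQE_F_NOTIF)
            break;                  /* buffer may be reused now */
        sent = res;                 /* result of the send itself */
        if (!(flags & IORING_CQE_F_MORE))
            break;                  /* no notification will follow */
    }
    return sent;
}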

No regression is found in xfstests with ublk-nbd used as both the
test device and the scratch device, and the builtin test
(make test T=nbd) runs well.

Fio test("make test T=nbd") shows that ublk-nbd performance is
basically same with nbd-client/nbd driver when running fio on real
ethernet link(1g, 10+g), but ublk-nbd IOPS is higher by ~40% than
nbd-client(nbd driver) with 512K BS, which is because linux nbd
driver sets max_sectors_kb as 64KB at default.

But when running fio over a local tcp socket, on my test machine
ublk-nbd performs better than nbd-client/nbd driver, especially with
2 queues/2 jobs, where the gap is 10% ~ 30% depending on block size.

Any comments are welcome!

[1] https://github.com/ming1/ubdsrv/blob/master/nbd
[2] https://github.com/NetworkBlockDevice/nbd
[3] https://github.com/axboe/liburing
[4] https://github.com/ming1/ubdsrv/blob/master/nbd/README.rst

Thanks,
Ming
