Message-ID: <aM50q-eujoY7uvwc@fedora>
Date: Sat, 20 Sep 2025 17:32:27 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Caleb Sander Mateos <csander@...estorage.com>
Cc: Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 00/17] ublk: avoid accessing ublk_queue to handle
 ublksrv_io_cmd

On Wed, Sep 17, 2025 at 07:49:36PM -0600, Caleb Sander Mateos wrote:
> For ublk servers with many ublk queues, accessing the ublk_queue in
> ublk_ch_uring_cmd_local() and the functions it calls is a frequent cache miss.
> The ublk_queue is only accessed for its q_depth and flags, which are also
> available on ublk_device. And ublk_device is already accessed for nr_hw_queues,
> so it will already be cached. Unfortunately, the UBLK_IO_NEED_GET_DATA path
> still needs to access the ublk_queue for io_cmd_buf, so it's not possible to
> avoid accessing the ublk_queue there. (Allocating a single io_cmd_buf for all of
> a ublk_device's I/Os could be done in the future.) At least we can optimize
> UBLK_IO_FETCH_REQ, UBLK_IO_COMMIT_AND_FETCH_REQ, UBLK_IO_REGISTER_IO_BUF, and
> UBLK_IO_UNREGISTER_IO_BUF.
> Using only the ublk_device and not the ublk_queue in ublk_dispatch_req() is also
> possible, but left for a future change.

The idea looks good: avoid reading ublk_queue, since querying ublk_device is
both inevitable and sufficient.
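
For reference, the hot-path change amounts to roughly the following (a
minimal sketch with illustrative struct layout and helper names, not the
exact driver code):

	/* Read per-I/O limits from ublk_device instead of dereferencing
	 * ublk_queue: ublk_device is already hot from the nr_hw_queues
	 * check, while the per-queue struct is often a cache miss. */
	static bool ublk_dev_need_get_data(const struct ublk_device *ub)
	{
		/* flags are mirrored in the device-wide info */
		return ub->dev_info.flags & UBLK_F_NEED_GET_DATA;
	}

	static int ublk_check_tag(const struct ublk_device *ub, u16 tag)
	{
		/* queue depth is uniform across queues, so the device
		 * copy is equivalent to ubq->q_depth */
		if (tag >= ub->dev_info.queue_depth)
			return -EINVAL;
		return 0;
	}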

For the series,

Reviewed-by: Ming Lei <ming.lei@...hat.com>

BTW, 'const struct ublk_device *' could be passed to several of these
helpers; that can be a follow-up.
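
E.g. (helper name illustrative), read-only accessors can take the const
pointer directly:

	static inline bool ublk_dev_is_zoned(const struct ublk_device *ub)
	{
		return ub->dev_info.flags & UBLK_F_ZONED;
	}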


Thanks,
Ming

