Date:   Tue, 24 Oct 2023 14:45:59 +0700
From:   Bagas Sanjaya <bagasdotme@...il.com>
To:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux NVMe <linux-nvme@...ts.infradead.org>,
        Linux Block Devices <linux-block@...r.kernel.org>
Cc:     Christoph Hellwig <hch@....de>, Yu Kuai <yukuai3@...wei.com>,
        Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>,
        Sagi Grimberg <sagi@...mberg.me>, michallinuxstuff@...il.com
Subject: Fwd: queue/scheduler missing under nvmf block device

Hi,

I noticed a bug report on Bugzilla [1]. Quoting from it:

> Noticed that under 6.5.6 (Fedora build, 6.5.6-100.fc37.x86_64) the queue/scheduler attribute is not visible under a namespace block device connected over nvme-fabrics.
> 
> # readlink -f /sys/block/nvme0n1
> /sys/devices/virtual/nvme-subsystem/nvme-subsys0/nvme0n1
> # grep . /sys/devices/virtual/nvme-subsystem/nvme-subsys0/*/transport
> /sys/devices/virtual/nvme-subsystem/nvme-subsys0/nvme0/transport:rdma
> /sys/devices/virtual/nvme-subsystem/nvme-subsys0/nvme1/transport:rdma
> # [[ -e /sys/block/nvme0n1/queue/scheduler ]] || echo oops
> oops
> 
> What's a bit confusing is that each of the controllers attached to this subsystem also exposes an nvme*c*n1 device. These are marked as hidden in sysfs, hence not available as actual block devices (i.e. not present under /dev/). That said, these devices do have the queue/scheduler attribute available under sysfs.
> 
> # readlink -f /sys/block/nvme0*c*
> /sys/devices/virtual/nvme-fabrics/ctl/nvme0/nvme0c0n1
> /sys/devices/virtual/nvme-fabrics/ctl/nvme1/nvme0c1n1
> # readlink -f  /sys/block/nvme0*c*/queue/scheduler
> /sys/devices/virtual/nvme-fabrics/ctl/nvme0/nvme0c0n1/queue/scheduler
> /sys/devices/virtual/nvme-fabrics/ctl/nvme1/nvme0c1n1/queue/scheduler
> # grep . /sys/block/nvme0*c*/queue/scheduler
> /sys/block/nvme0c0n1/queue/scheduler:[none] mq-deadline kyber bfq
> /sys/block/nvme0c1n1/queue/scheduler:[none] mq-deadline kyber bfq
> 
> I have a little test infra which normally, after the nvmf device gets connected, takes the namespace device, sets some sysfs attributes to specific values (including queue/scheduler), and then executes fio targeting this namespace device.
> 
> The only clue I found is this commit: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6d85ebf95c44e, but I am not sure what to make of it. Initially my thought was "ok, queue/scheduler is gone, so just don't try to touch it". But if the c*n* devices still have this attribute available, are they meant to be used instead of the actual namespace device to tweak these specific sysfs attributes?
> 
> The problem here is that I have two c*n* devices but only a single block device (multipath setup). Does that mean that changing either of those devices' attributes affects the actual namespace device, or is each path independent here?
> 
> Any hints would be appreciated. :)
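
For reference, the workflow the reporter describes boils down to roughly the sketch below. This is hypothetical: the device name, the chosen scheduler, and the fio options are my assumptions, not taken from the report.

# Hypothetical sketch of the reported workflow: tune queue/scheduler, then run fio.
# (The namespace was connected beforehand, e.g. via nvme connect -t rdma ...)
dev=/dev/nvme0n1
sched_attr=/sys/block/$(basename "$dev")/queue/scheduler

# On 6.5.6 the multipath head device may not expose queue/scheduler at all,
# so guard the write instead of assuming the attribute exists.
if [ -w "$sched_attr" ]; then
        echo mq-deadline > "$sched_attr"
else
        echo "queue/scheduler missing for $dev" >&2
fi

# Example fio invocation against the namespace device (parameters are arbitrary).
fio --name=nvmf-test --filename="$dev" --ioengine=libaio --direct=1 \
        --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based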

See Bugzilla for the full thread.

Thanks.

[1]: https://bugzilla.kernel.org/show_bug.cgi?id=218042

-- 
An old man doll... just what I always wanted! - Clara
