Message-ID: <20251121072421.GA29754@lst.de>
Date: Fri, 21 Nov 2025 08:24:21 +0100
From: Christoph Hellwig <hch@....de>
To: Uladzislau Rezki <urezki@...il.com>
Cc: Christoph Hellwig <hch@....de>, Mikulas Patocka <mpatocka@...hat.com>,
Benjamin Marzinski <bmarzins@...hat.com>,
Alasdair Kergon <agk@...hat.com>, DMML <dm-devel@...ts.linux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Snitzer <snitzer@...hat.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RESEND PATCH] dm-ebs: Mark full buffer dirty even on partial
write
On Thu, Nov 20, 2025 at 01:08:57PM +0100, Uladzislau Rezki wrote:
> Could you please check below? Is the last one correctly reported?
The latter looks unexpected, but it is because qemu does not pass its
physical_block_size attribute through to any of the NVMe fields that Linux
interprets as a physical block size (NVMe doesn't actually have the concept
of a physical block size, unlike SCSI/ATA):
root@...tvm:~# nvme id-ns -H /dev/nvme0n1 | grep npw
npwg : 0
npwa : 0
root@...tvm:~# nvme id-ns -H /dev/nvme0n1 | grep naw
nawun : 0
nawupf : 0
root@...tvm:~# nvme id-ctrl -H /dev/nvme0 | grep awupf
awupf : 0
But as said multiple times, that should not really matter: the logical
block size is the granularity of I/O; the physical block size is just
a performance hint.
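
For reference, here is a rough standalone sketch of how Linux derives the
physical block size for an NVMe namespace, paraphrased from
nvme_update_disk_info() in drivers/nvme/host/core.c (simplified, e.g. the
NABO check is dropped; this is not the verbatim kernel code).  All the
identify fields involved are 0-based counts of logical blocks, which is why
the all-zero output above still yields a sane one-LBA granularity:

/*
 * Sketch of the Linux physical block size derivation for NVMe,
 * paraphrased from nvme_update_disk_info(); simplified, not verbatim.
 */
#include <stdio.h>
#include <stdint.h>

#define NVME_NS_FEAT_ATOMICS	(1 << 1)	/* NSFEAT: NAWUN/NAWUPF valid */
#define NVME_NS_FEAT_IO_OPT	(1 << 4)	/* NSFEAT: NPWG/NOWS valid */

static uint32_t nvme_phys_bs(uint32_t lba_size, uint8_t nsfeat,
			     uint16_t npwg, uint16_t nawupf, uint8_t awupf)
{
	uint32_t atomic_bs, phys_bs = lba_size;

	/* Atomic write size: per-namespace NAWUPF wins over controller AWUPF. */
	if ((nsfeat & NVME_NS_FEAT_ATOMICS) && nawupf)
		atomic_bs = (1 + nawupf) * lba_size;
	else
		atomic_bs = (1 + awupf) * lba_size;

	/* Preferred write granularity, if the namespace reports one. */
	if (nsfeat & NVME_NS_FEAT_IO_OPT)
		phys_bs = (1 + npwg) * lba_size;

	/* The reported physical size is capped at the atomic write size. */
	return phys_bs < atomic_bs ? phys_bs : atomic_bs;
}

int main(void)
{
	/* The identify output quoted above: everything zero -> one LBA. */
	printf("phys_bs = %u\n", nvme_phys_bs(512, 0, 0, 0, 0));
	return 0;
}

With the all-zero values from the identify output above this degenerates to
the logical block size, which is what then shows up in
/sys/block/nvme0n1/queue/physical_block_size.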