Message-ID: <aRxa1wPMsJwBubjx@pc636>
Date: Tue, 18 Nov 2025 12:39:03 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: Mikulas Patocka <mpatocka@...hat.com>
Cc: "Uladzislau Rezki (Sony)" <urezki@...il.com>,
Alasdair Kergon <agk@...hat.com>, DMML <dm-devel@...ts.linux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Snitzer <snitzer@...hat.com>, Christoph Hellwig <hch@....de>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RESEND PATCH] dm-ebs: Mark full buffer dirty even on partial
write
Hello, Mikulas!
> Hi
>
> What is the logical_block_size of the underlying nvme device? - i.e.
> what's the content of this file
> /sys/block/nvme0n1/queue/logical_block_size in the virtual machine?
>
It is 512, whereas the physical block size is bigger, i.e. my device
cannot perform I/O at 512-byte granularity.
As for the virtual machine, I just simulated the problem there so people
can set it up and check. The commit message describes how it can be
reproduced. The dm-ebs target which I set up does ebs-to-ubs conversion,
so the NVMe driver gets BIOs that are sized and aligned to the ubs. The
ubs corresponds to the underlying physical device's I/O size.
So your patch does not work if logical < physical, therefore it does
not help my case.
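For context, the widening that the ebs-to-ubs conversion implies can be
sketched roughly as follows (hypothetical Python for illustration only,
not the kernel code): a write issued at ebs granularity has to be
expanded to full ubs-aligned buffers before it reaches the device, which
is why the whole buffer ends up involved even on a partial write.

```python
# Illustrative sketch of the ebs-to-ubs widening described above.
# Names are made up for this example; this is not dm-ebs source code.

def widen_to_ubs(offset, length, ubs):
    """Widen a byte range written at ebs granularity to ubs-aligned
    boundaries, as needed before issuing I/O to a device whose
    physical block size is ubs. Returns (aligned_offset, aligned_length).
    """
    start = offset - (offset % ubs)   # round the start down to a ubs boundary
    end = offset + length
    end = end + (-end % ubs)          # round the end up to a ubs boundary
    return start, end - start

# A 512-byte write at offset 1024 on a device with a 4096-byte ubs
# touches the whole 4096-byte buffer containing it:
print(widen_to_ubs(1024, 512, 4096))  # -> (0, 4096)
```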
--
Uladzislau Rezki