Message-ID: <aRxpPFQmbB0wnmM7@milan>
Date: Tue, 18 Nov 2025 13:40:28 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: Mikulas Patocka <mpatocka@...hat.com>
Cc: Uladzislau Rezki <urezki@...il.com>, Alasdair Kergon <agk@...hat.com>,
	DMML <dm-devel@...ts.linux.dev>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mike Snitzer <snitzer@...hat.com>, Christoph Hellwig <hch@....de>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RESEND PATCH] dm-ebs: Mark full buffer dirty even on partial
 write

On Tue, Nov 18, 2025 at 01:00:36PM +0100, Mikulas Patocka wrote:
> 
> 
> On Tue, 18 Nov 2025, Uladzislau Rezki wrote:
> 
> > Hello, Mikulas!
> > 
> > > Hi
> > > 
> > > What is the logical_block_size of the underlying nvme device? - i.e. 
> > > what's the content of this file 
> > > /sys/block/nvme0n1/queue/logical_block_size in the virtual machine?
> > > 
> > It is 512, whereas the physical block size is bigger, i.e. my device
> > cannot perform I/O at 512-byte granularity.
> 
> And what is physical block size? Is it 8192?
> 
Bigger than logical.
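
For reference, a minimal userspace sketch (not part of the thread or the
patch) that queries both values through the BLKSSZGET and BLKPBSZGET
ioctls; it reports the same numbers as the sysfs files mentioned above:

/*
 * Print the logical and physical block size the kernel reports for a
 * block device, e.g. ./blksize /dev/nvme0n1. Equivalent to reading
 * /sys/block/<dev>/queue/{logical,physical}_block_size.
 */
#include <fcntl.h>
#include <linux/fs.h>           /* BLKSSZGET, BLKPBSZGET */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        int fd, lbs = 0;
        unsigned int pbs = 0;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <block device>\n", argv[0]);
                return 1;
        }

        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        if (ioctl(fd, BLKSSZGET, &lbs) || ioctl(fd, BLKPBSZGET, &pbs)) {
                perror("ioctl");
                close(fd);
                return 1;
        }

        printf("logical:  %d\nphysical: %u\n", lbs, pbs);
        close(fd);
        return 0;
}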

> > As for the virtual machine, I just simulated the problem there so that
> > people can set it up and check. The commit message describes how it can
> > be reproduced.
> > 
> > The dm-ebs target which I set up does ebs-to-ubs conversion, so the NVMe
> > driver gets BIOs that are sized and aligned to the ubs size. The ubs
> > size corresponds to the underlying physical device's I/O size.
> > 
> > So your patch does not work if logical < physical. Therefore it does
> > not help my project.
> 
> Logical block size is the granularity at which the device can accept I/O. 
> Physical block size is the block size on the medium.
> 
> If logical < physical, then the device performs read-modify-write cycle 
> when writing blocks that are not aligned at physical block size.
> 
This is not true. It depends on your device and its specification. If the
device can't do that, there is dm-ebs to do the job.
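
For illustration only, a userspace sketch of the read-modify-write that an
ebs-to-ubs shim has to perform for a sub-ubs write (the EBS/UBS values and
the helper are assumptions, not the dm-ebs code); it also shows why the
whole underlying block ends up being written back, which is roughly the
behaviour the $SUBJECT change wants to guarantee:

#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define EBS 512                 /* emulated (logical) block size */
#define UBS 8192                /* underlying (physical) block size */

/*
 * Write one EBS-sized block at emulated LBA 'ebs_lba' to a device that
 * can only do I/O in UBS-sized units. Real direct I/O would also need
 * an aligned buffer; this only sketches the RMW cycle.
 */
int write_ebs_block(int fd, uint64_t ebs_lba, const void *data)
{
        char buf[UBS];
        off_t ubs_off = ebs_lba * EBS / UBS * UBS;  /* enclosing ubs block */
        size_t in_off = ebs_lba * EBS - ubs_off;    /* fragment offset in it */

        /* Read the full underlying block ... */
        if (pread(fd, buf, UBS, ubs_off) != UBS)
                return -1;

        /* ... patch in the small write ... */
        memcpy(buf + in_off, data, EBS);

        /* ... and write the whole block back. */
        if (pwrite(fd, buf, UBS, ubs_off) != UBS)
                return -1;

        return 0;
}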

> So, your setup is broken, because it advertises logical block size 512, 
> but it is not able to perform I/O at this granularity.
> 
I posted the workflow for reproducing the problem; see the commit message.
But as I noted, it is there so that people can simulate it.
 
But in my real case, logical < physical.

> There is this piece of code in include/linux/blkdev.h:
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> /*
>  * We should strive for 1 << (PAGE_SHIFT + MAX_PAGECACHE_ORDER)
>  * however we constrain this to what we can validate and test.
>  */
> #define BLK_MAX_BLOCK_SIZE      SZ_64K
> #else
> #define BLK_MAX_BLOCK_SIZE      PAGE_SIZE
> #endif
> 
> /* blk_validate_limits() validates bsize, so drivers don't usually need to */
> static inline int blk_validate_block_size(unsigned long bsize)
> {
>         if (bsize < 512 || bsize > BLK_MAX_BLOCK_SIZE || !is_power_of_2(bsize))
>                 return -EINVAL;
> 
>         return 0;
> }
> 
> What happens when you define CONFIG_TRANSPARENT_HUGEPAGE in your .config? 
> Does it fix the problem with small logical block size for you?
> 
CONFIG_TRANSPARENT_HUGEPAGE allows you to work with a block size bigger
than PAGE_SIZE (PS < BS). I have it enabled in my case.
 
Just to repeat: the device cannot do I/O at the logical block size, only
at the physical one.
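
To make the config dependence concrete, a small userspace restatement of
the quoted check (only an illustration, not kernel code; a 4K PAGE_SIZE is
assumed):

#include <stdbool.h>
#include <stdio.h>

#define ASSUMED_PAGE_SIZE 4096          /* assumption: 4K pages */
#define SZ_64K            65536

static bool is_power_of_2(unsigned long n)
{
        return n && !(n & (n - 1));
}

/* Mirrors blk_validate_block_size() with the cap passed in explicitly. */
static int validate(unsigned long bsize, unsigned long max)
{
        if (bsize < 512 || bsize > max || !is_power_of_2(bsize))
                return -22;     /* -EINVAL */
        return 0;
}

int main(void)
{
        unsigned long sizes[] = { 512, 4096, 8192, 65536 };

        for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("%5lu: THP=y -> %3d, THP=n -> %3d\n", sizes[i],
                       validate(sizes[i], SZ_64K),
                       validate(sizes[i], ASSUMED_PAGE_SIZE));
        return 0;
}

With CONFIG_TRANSPARENT_HUGEPAGE the cap is SZ_64K, so an 8192-byte block
size is accepted; without it the cap is PAGE_SIZE and 8192 is rejected.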

--
Uladzislau Rezki
