Message-ID: <20250811120819.1022017-1-alexjlzheng@tencent.com>
Date: Mon, 11 Aug 2025 20:08:19 +0800
From: Jinliang Zheng <alexjlzheng@...il.com>
To: hch@...radead.org
Cc: alexjlzheng@...il.com,
	alexjlzheng@...cent.com,
	brauner@...nel.org,
	djwong@...nel.org,
	linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	linux-xfs@...r.kernel.org
Subject: Re: [PATCH v2 0/4] iomap: allow partial folio write with iomap_folio_state

On Mon, 11 Aug 2025 03:38:17 -0700, Christoph Hellwig wrote:
> On Sun, Aug 10, 2025 at 06:15:50PM +0800, alexjlzheng@...il.com wrote:
> > From: Jinliang Zheng <alexjlzheng@...cent.com>
> > 
> > With iomap_folio_state, we can identify uptodate states at the block
> > level, and a read_folio reading can correctly handle partially
> > uptodate folios.
> > 
> > Therefore, when a partial write occurs, accept the block-aligned
> > partial write instead of rejecting the entire write.
>

Thank you for your reply. :)
 
> We're not rejecting the entire write, but instead moving on to the
> next loop iteration.

Yes, but the next iteration has to re-copy from the beginning, which
means everything copied in this iteration is wasted. The purpose of
this patch set is to reduce the number of bytes that need to be
re-copied and the number of discarded copies.

For example, suppose a folio is 2MB, blocksize is 4kB, and the copied
bytes are 2MB-3kB.

Without this patchset, we'd end up re-copying 2MB-3kB of bytes in total
across the subsequent iterations.

 |<-------------------- 2MB -------------------->|
 +-------+-------+-------+-------+-------+-------+
 | block |  ...  | block | block |  ...  | block | folio
 +-------+-------+-------+-------+-------+-------+
 |<-4kB->|

 |<--------------- copied 2MB-3kB --------->|       first time copied
 |<-------- 1MB -------->|                          next time we need copy (chunk /= 2)
                         |<-------- 1MB -------->|  next next time we need copy.

 |<------ 2MB-3kB bytes duplicate copy ---->|

With this patchset, we can accept 2MB-4kB of bytes, which is block-aligned.
This means we only need to process the remaining 4kB in the next iteration.

 |<-------------------- 2MB -------------------->|
 +-------+-------+-------+-------+-------+-------+
 | block |  ...  | block | block |  ...  | block | folio
 +-------+-------+-------+-------+-------+-------+
 |<-4kB->|

 |<--------------- copied 2MB-3kB --------->|       first time copied
                                         |<-4kB->|  next time we need copy

                                         |<>|
                              only 1kB bytes duplicate copy


> 
> > This patchset has been tested with xfstests' generic and xfs groups, and
> > there are no new failed cases compared to the latest upstream kernel.
> 
> What is the motivation for this series?  Do you see performance
> improvements in a workload you care about?

Partial writes are inherently a relatively unusual situation and don't show
up prominently in performance testing.

However, in scenarios with numerous copy faults, it can significantly reduce
the number of bytes that have to be copied twice.

thanks,
Jinliang Zheng :)
