Message-ID: <20240531150016.GL52987@frogsfrogsfrogs>
Date: Fri, 31 May 2024 08:00:16 -0700
From: "Darrick J. Wong" <djwong@...nel.org>
To: Christoph Hellwig <hch@...radead.org>
Cc: Zhang Yi <yi.zhang@...weicloud.com>, linux-xfs@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	brauner@...nel.org, david@...morbit.com, chandanbabu@...nel.org,
	jack@...e.cz, willy@...radead.org, yi.zhang@...wei.com,
	chengzhihao1@...wei.com, yukuai3@...wei.com
Subject: Re: [RFC PATCH v4 8/8] xfs: improve truncate on a realtime inode
 with huge extsize

On Fri, May 31, 2024 at 07:15:34AM -0700, Christoph Hellwig wrote:
> On Fri, May 31, 2024 at 07:12:10AM -0700, Darrick J. Wong wrote:
> > There are <cough> some users that want 1G extents.
> > 
> > For the rest of us who don't live in the stratosphere, it's convenient
> > for fsdax to have rt extents that match the PMD size, which could be
> > large on arm64 (e.g. 512M, or two smr sectors).
> 
> That's fine.  Maybe to rephrase my question: with this series we
> have 3 different truncate paths:
> 
>  1) unmap all blocks (!rt || rtextsize == 1)
>  2) zero leftover blocks in an rtextent (small rtextsize, but > 1)
>  3) convert leftover blocks in an rtextent to unwritten (large
>    rtextsize)
> 
> What is the right threshold to switch between 2 and 3?  And do we
> really need 2) at all?

I don't think we need (2) at all.

There's likely some threshold below which it's a wash -- compare with
ext4's strategy of writing 64k chunks, even if that requires zeroing
pagecache, to cut down on fragmentation on hdds -- but I don't know if
we care anymore. ;)
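
For concreteness, here's a minimal sketch of what the decision logic
collapses to if we drop (2).  xfs_unmap_tail() and
xfs_convert_tail_to_unwritten() are hypothetical placeholder names for
the unmap and unwritten-conversion steps, not functions from this
series:

	/*
	 * Sketch only, not the actual patch.  ip is the inode being
	 * truncated down; end_fsb is the first block past the new EOF.
	 */
	static int
	xfs_itruncate_tail_sketch(
		struct xfs_inode	*ip,
		xfs_fileoff_t		end_fsb)
	{
		struct xfs_mount	*mp = ip->i_mount;

		/* (1) !rt || rtextsize == 1: just unmap everything past EOF */
		if (!XFS_IS_REALTIME_INODE(ip) || mp->m_sb.sb_rextsize == 1)
			return xfs_unmap_tail(ip, end_fsb); /* placeholder */

		/*
		 * (3) rt with rtextsize > 1: convert the leftover blocks in
		 * the last rt extent to unwritten instead of zeroing them,
		 * so that (say) a 1G rtextent never forces writing up to a
		 * gigabyte of zeroes at truncate time.
		 */
		return xfs_convert_tail_to_unwritten(ip, end_fsb); /* placeholder */
	}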

--D
