Message-ID: <ZJTMMhIhvXKfNz/4@dread.disaster.area>
Date:   Fri, 23 Jun 2023 08:33:22 +1000
From:   Dave Chinner <david@...morbit.com>
To:     Hannes Reinecke <hare@...e.de>
Cc:     Pankaj Raghav <p.raghav@...sung.com>, willy@...radead.org,
        gost.dev@...sung.com, mcgrof@...nel.org, hch@....de,
        jwong@...nel.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC 0/4] minimum folio order support in filemap

On Thu, Jun 22, 2023 at 12:23:10PM +0200, Hannes Reinecke wrote:
> On 6/22/23 12:20, Dave Chinner wrote:
> > On Thu, Jun 22, 2023 at 08:50:06AM +0200, Hannes Reinecke wrote:
> > > On 6/22/23 07:51, Hannes Reinecke wrote:
> > > > On 6/22/23 00:07, Dave Chinner wrote:
> > > > > On Wed, Jun 21, 2023 at 11:00:24AM +0200, Hannes Reinecke wrote:
> > > > > > On 6/21/23 10:38, Pankaj Raghav wrote:
> > > > > > Hmm. Most unfortunate; I've just finished my own patchset
> > > > > > (duplicating much of this work) to get 'brd' running with
> > > > > > large folios. And it even works this time: 'fsx' from the
> > > > > > xfstests suite runs happily on that.
> > > > > 
> > > > > So you've converted a filesystem to use bs > ps, too? Or is the
> > > > > filesystem that fsx is running on just using normal 4kB block size?
> > > > > If the latter, then fsx is not actually testing the large folio page
> > > > > cache support; it's mostly just doing 4kB aligned IO to brd....
> > > > > 
> > > > I have been running fsx on an xfs with bs=16k, and it worked like a charm.
> > > > I'll try to run the xfstests suite once I'm finished with merging
> > > > Pankaj's patches into my patchset.
> > > > Well, would've been too easy.
> > > 'fsx' bails out at test 27 (collapse), with:
> > > 
> > > XFS (ram0): Corruption detected. Unmount and run xfs_repair
> > > XFS (ram0): Internal error isnullstartblock(got.br_startblock) at line 5787
> > > of file fs/xfs/libxfs/xfs_bmap.c.  Caller
> > > xfs_bmap_collapse_extents+0x2d9/0x320 [xfs]
> > > 
> > > Guess some more work needs to be done here.
> > 
> > Yup, start by trying to get the fstests that run fsx through cleanly
> > first. That'll get you through the first 100,000 or so test ops
> > in a few different run configs. Those canned tests are:
> > 
> > tests/generic/075
> > tests/generic/112
> > tests/generic/127
> > tests/generic/231
> > tests/generic/455
> > tests/generic/457
> > 
> THX.
> 
> Any preferences for the filesystem size?
> I'm currently running off two ramdisks with 512M each; if that's too small I
> need to increase the memory of the VM ...

I generally run my pmem/ramdisk VM on a pair of 8GB ramdisks for 4kB
filesystem testing.
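
If it helps, with the brd module that pair of ramdisks can be set up
with something like this (rd_size is in KiB, so 8388608 = 8GB):

	# two 8GB ramdisks: /dev/ram0 and /dev/ram1
	modprobe brd rd_nr=2 rd_size=8388608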

Because you are using larger block sizes, you are going to want to
use larger rather than smaller filesystems: for a given filesystem
size there are fewer blocks, and each metadata block holds many more
records before the tree spills to multiple nodes/levels.

e.g. going from 4kB to 16kB needs a 16x larger fs and file sizes for
the 16kB filesystem to exercise the same metadata tree depth
coverage as the 4kB filesystem (i.e. each single block extent is 4x
larger, and each single metadata block holds 4x as much metadata
before it spills, so 4 x 4 = 16x).
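
To make that concrete: mkfs.xfs will already happily make a 16kB
block size filesystem on one of those ramdisks, something like:

	mkfs.xfs -f -b size=16k /dev/ram0

it's mounting it that needs your bs > ps page cache support, of
course.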

With this in mind, I'd say you want the 16kB block size ramdisks to
be as large as you can make them when running fstests....
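
e.g. assuming the ramdisks above and /mnt/test + /mnt/scratch as the
mount points, a minimal fstests local.config for the 16kB runs might
look like:

	export TEST_DEV=/dev/ram0
	export TEST_DIR=/mnt/test
	export SCRATCH_DEV=/dev/ram1
	export SCRATCH_MNT=/mnt/scratch
	export MKFS_OPTIONS="-b size=16k"

and then the canned fsx tests I listed above can be run directly:

	./check generic/075 generic/112 generic/127 generic/231 \
		generic/455 generic/457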

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
