Date:   Sat, 25 Mar 2023 23:54:02 -0400
From:   "Theodore Ts'o" <tytso@....edu>
To:     Ojaswin Mujoo <ojaswin@...ux.ibm.com>
Cc:     Jan Kara <jack@...e.cz>, linux-ext4@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        Ritesh Harjani <ritesh.list@...il.com>,
        Andreas Dilger <adilger@...ger.ca>
Subject: Re: [RFC 08/11] ext4: Don't skip prefetching BLOCK_UNINIT groups

On Fri, Mar 17, 2023 at 04:25:04PM +0530, Ojaswin Mujoo wrote:
> > > This improves the accuracy of CR0/1 allocation: earlier, essentially
> > > empty BLOCK_UNINIT groups could be ignored by CR0/1 because their buddy
> > > was not initialized, leading to slower CR2 allocations. With this patch
> > > CR0/1 will be able to discover these groups as well, improving performance.
> >
> > The patch looks good. I just somewhat wonder - this change may result in
> > uninitialized groups being initialized and used earlier (previously we'd
> > rather search in other already initialized groups) which may spread
> > allocations more. But I suppose that's fine and uninit groups are not
> > really a feature meant to limit fragmentation and as the filesystem ages
> > the differences should be minimal. So feel free to add:
> 
> Another point I wanted to discuss wrt this patch series was why the
> BLOCK_UNINIT groups were not being prefetched earlier. One reason I can
> think of is that this might lead to memory pressure when we have too
> many empty BGs on a very large (say, terabytes) disk.

Originally the prefetch logic was simply something to optimize I/O ---
that is, normally all of the block bitmaps for a flex_bg are
contiguous, so why not read them all in a single I/O issued all at
once, instead of doing them as separate 4k reads?
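
To make that concrete, here's a simplified userspace-style sketch of
the difference (illustrative only -- not the actual ext4 code, and the
constants and function names are made up):

#include <sys/types.h>
#include <unistd.h>

#define BITMAP_SZ	4096	/* one block bitmap = one 4k block */
#define GROUPS_PER_FLEX	16	/* e.g. a flex_bg factor of 16     */

/* naive: one 4k read per block group */
static int read_bitmaps_one_by_one(int fd, off_t start, char *buf)
{
	for (int i = 0; i < GROUPS_PER_FLEX; i++)
		if (pread(fd, buf + i * BITMAP_SZ, BITMAP_SZ,
			  start + (off_t)i * BITMAP_SZ) != BITMAP_SZ)
			return -1;
	return 0;
}

/* prefetch-style: the whole flex group's bitmaps in a single I/O */
static int read_bitmaps_batched(int fd, off_t start, char *buf)
{
	if (pread(fd, buf, GROUPS_PER_FLEX * BITMAP_SZ, start) !=
	    GROUPS_PER_FLEX * BITMAP_SZ)
		return -1;
	return 0;
}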

Skipping block groups that hadn't yet been prefetched was added later,
to improve allocator performance on freshly mounted file systems where
the prefetch hadn't yet had a chance to pull in the block bitmaps.  The
problem was that if the block groups hadn't been prefetched yet, the
cr0 scan would fetch them itself, one group at a time.  On a storage
device where blocks with monotonically increasing LBA numbers aren't
necessarily stored adjacently on disk (for example, a dm-thin volume;
and if one were to do an experiment on certain emulated block devices
in certain hyperscaler cloud environments, one might find a similar
performance profile), a cr0 scan issuing a series of 16 sequential 4k
I/O's can be substantially worse, from a performance standpoint, than
a single sequential 64k I/O.
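
Some back-of-the-envelope numbers make the gap concrete.  The per-I/O
overhead and throughput figures below are assumptions for illustration,
not measurements of any particular device:

#include <stdio.h>

int main(void)
{
	/* Assumed: 0.5 ms fixed cost per request on a device where
	 * adjacent LBAs aren't physically adjacent, ~100 MB/s once
	 * the I/O is in flight.  Illustrative numbers only. */
	const double per_io_ms = 0.5;
	const double mb_per_s  = 100.0;

	double t_16x4k = 16 * (per_io_ms + 4.0  / 1024 / mb_per_s * 1000);
	double t_1x64k =  1 * (per_io_ms + 64.0 / 1024 / mb_per_s * 1000);

	printf("16 x 4k reads: %.2f ms\n", t_16x4k);	/* ~8.63 ms */
	printf("1 x 64k read:  %.2f ms\n", t_1x64k);	/* ~1.13 ms */
	return 0;
}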

When this change was made, the focus was on *initialized* bitmaps,
which take a long time to read when issued as individual sequential 4k
I/O's; the fix was to skip scanning those groups initially, on the
theory that the prefetch would pull the bitmaps in fairly quickly, and
that a few bad allocations on a freshly mounted file system were an
acceptable tradeoff.

But prefetching BLOCK_UNINIT groups makes sense; it should fix the
problem that you've identified (at least for BLOCK_UNINIT groups; for
initialized block bitmaps, we'll still have less optimal allocation
patterns until we've managed to prefetch those block groups).
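
And prefetching a BLOCK_UNINIT group is especially cheap, since there
is no on-disk bitmap to read at all; the bitmap can be synthesized in
memory.  A simplified sketch of that idea (illustrative types and
names, not the actual mballoc code):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative structure only -- the real ext4 ones are different. */
struct group {
	bool	block_uninit;	/* stands in for the BLOCK_UNINIT bg flag */
	uint8_t	bitmap[4096];	/* one 4k block bitmap                    */
	bool	bitmap_uptodate;
};

/* Stub for the real async read path for initialized groups. */
static void submit_bitmap_read(struct group *g)
{
	g->bitmap_uptodate = true;
}

static void prefetch_group(struct group *g, unsigned meta_blocks)
{
	if (g->block_uninit) {
		/* All blocks free except the group's own metadata,
		 * which is statically known -- so no disk read. */
		memset(g->bitmap, 0, sizeof(g->bitmap));
		for (unsigned i = 0; i < meta_blocks; i++)
			g->bitmap[i / 8] |= 1u << (i % 8);
		g->bitmap_uptodate = true;
	} else {
		submit_bitmap_read(g);	/* needs a 4k read */
	}
}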

Cheers,

					- Ted
