Date:   Thu, 8 Sep 2022 11:01:37 +0200
From:   Jan Kara <jack@...e.cz>
To:     "Ritesh Harjani (IBM)" <ritesh.list@...il.com>
Cc:     Jan Kara <jack@...e.cz>, Ted Tso <tytso@....edu>,
        linux-ext4@...r.kernel.org,
        Thorsten Leemhuis <regressions@...mhuis.info>,
        Ojaswin Mujoo <ojaswin@...ux.ibm.com>,
        Stefan Wahren <stefan.wahren@...e.com>,
        Andreas Dilger <adilger.kernel@...ger.ca>
Subject: Re: [PATCH 5/5] ext4: Use buckets for cr 1 block scan instead of
 rbtree

On Thu 08-09-22 00:11:10, Ritesh Harjani (IBM) wrote:
> On 22/09/06 05:29PM, Jan Kara wrote:
> > Using an rbtree for sorting groups by average fragment size is relatively
> > expensive (it needs an rbtree update on every block freeing or allocation) and
> > leads to wide spreading of allocations because the selection of a block group
> > is very sensitive both to changes in free space and to the amount of blocks
> > allocated. Furthermore, selecting the group with the best matching average
> > fragment size is not necessary anyway, even more so because the variability
> > of fragment sizes within a group is likely large, so the average is not
> > telling much. We just need a group with a large enough average fragment size
> > so that we have a high probability of finding a large enough free extent,
> > and we don't want the average fragment size to be too big so that we are
> > likely to find a free extent only somewhat larger than what we need.
> > 
> > So instead of maintaining an rbtree of groups sorted by fragment size, keep
> > bins (lists) of groups where the average fragment size is in the interval
> > [2^i, 2^(i+1)). This structure requires fewer updates on block allocation
> > / freeing, generally avoids chaotic spreading of allocations into block
> > groups, and is still able to quickly (even faster than the rbtree)
> > provide a block group which is likely to have a suitably sized free
> > space extent.
> 
> This makes sense because we anyway maintain buddy bitmaps for MB_NUM_ORDERS
> orders. Hence our data structure maintaining the different lists of groups,
> keyed by their average fragment size, can be bounded within MB_NUM_ORDERS
> lists. This also gives amortized O(1) search time for finding the right group
> in the CR1 search.
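
For illustration, here is a minimal userspace sketch of that bucket scheme
(names and the MB_NUM_ORDERS value are hypothetical; the real kernel code
works on struct ext4_group_info under per-list locks):

#define MB_NUM_ORDERS 14			/* assumed bucket count */

struct group {
	struct group *next;			/* linkage inside one bucket */
	unsigned int free_blocks;		/* free blocks in the group */
	unsigned int fragments;			/* number of free extents */
};

/* bucket[i] holds groups whose average fragment size is in [2^i, 2^(i+1)) */
static struct group *bucket[MB_NUM_ORDERS];

/* Map an average fragment size to its bucket index (roughly fls(avg) - 1). */
static int avg_fragment_order(unsigned int avg)
{
	int order = 0;

	while (avg > 1) {
		avg >>= 1;
		order++;
	}
	return order < MB_NUM_ORDERS ? order : MB_NUM_ORDERS - 1;
}

/*
 * CR1-style lookup: walk the buckets from the order matching the request
 * upward and take the first non-empty one.  At most MB_NUM_ORDERS buckets
 * are probed, independently of how many groups the filesystem has.
 */
static struct group *find_group(unsigned int request_len)
{
	int order;

	for (order = avg_fragment_order(request_len); order < MB_NUM_ORDERS; order++)
		if (bucket[order])
			return bucket[order];
	return NULL;
}

Moving a group between buckets on allocation or free is then only a list
removal plus a list insertion, which is where the savings over rebalancing
the rbtree come from.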
> 
> > 
> > This patch reduces the number of block groups used when untarring an archive
> > with medium-sized files (size somewhat above 64k, which is the default
> > mballoc limit for avoiding locality group preallocation) to about half
> > and thus improves write speeds for eMMC flash significantly.
> > 
> 
> Indeed a nice change. More in line with how we maintain the
> sbi->s_mb_largest_free_orders lists.

I didn't really find any more comments than the one below?

> I think, as you already noted, there are a few minor checkpatch errors;
> other than that, one small query below.

Yep, some checkpatch errors + procfs file handling bugs + one bad unlock in
an error recovery path. All fixed up locally :)

> > -/*
> > - * Reinsert grpinfo into the avg_fragment_size tree with new average
> > - * fragment size.
> > - */
> > +/* Move group to appropriate avg_fragment_size list */
> >  static void
> >  mb_update_avg_fragment_size(struct super_block *sb, struct ext4_group_info *grp)
> >  {
> >  	struct ext4_sb_info *sbi = EXT4_SB(sb);
> > +	int new_order;
> >  
> >  	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || grp->bb_free == 0)
> >  		return;
> >  
> > -	write_lock(&sbi->s_mb_rb_lock);
> > -	if (!RB_EMPTY_NODE(&grp->bb_avg_fragment_size_rb)) {
> > -		rb_erase(&grp->bb_avg_fragment_size_rb,
> > -				&sbi->s_mb_avg_fragment_size_root);
> > -		RB_CLEAR_NODE(&grp->bb_avg_fragment_size_rb);
> > -	}
> > +	new_order = mb_avg_fragment_size_order(sb,
> > +					grp->bb_free / grp->bb_fragments);
> 
> The previous rbtree code was always checking whether grp->bb_fragments was 0.
> Can grp->bb_fragments be 0 here?

Since grp->bb_free is greater than zero, there should be at least one
fragment...
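
For what it's worth, a hedged userspace sketch of that argument (hypothetical
field and function names, not the kernel's mb_avg_fragment_size_order()):

#include <assert.h>

struct group_counters {
	unsigned int free_blocks;	/* bb_free in the real code */
	unsigned int fragments;		/* bb_fragments in the real code */
};

/*
 * Returns the bucket order for a group, or -1 when the group has no free
 * blocks.  The patch's caller returns early when bb_free == 0, so by the
 * time the order is computed the group has free blocks, hence at least one
 * free extent: fragments >= 1 and the division is safe.
 */
static int group_order(const struct group_counters *g)
{
	unsigned int avg;
	int order = 0;

	if (g->free_blocks == 0)
		return -1;
	assert(g->fragments >= 1);
	avg = g->free_blocks / g->fragments;
	while (avg > 1) {		/* roughly fls(avg) - 1 */
		avg >>= 1;
		order++;
	}
	return order;
}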

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
