Message-ID: <20220908092314.j6o2szika2r6agal@riteshh-domain>
Date:   Thu, 8 Sep 2022 14:53:14 +0530
From:   "Ritesh Harjani (IBM)" <ritesh.list@...il.com>
To:     Jan Kara <jack@...e.cz>
Cc:     Ted Tso <tytso@....edu>, linux-ext4@...r.kernel.org,
        Thorsten Leemhuis <regressions@...mhuis.info>,
        Ojaswin Mujoo <ojaswin@...ux.ibm.com>,
        Stefan Wahren <stefan.wahren@...e.com>,
        Andreas Dilger <adilger.kernel@...ger.ca>
Subject: Re: [PATCH 5/5] ext4: Use buckets for cr 1 block scan instead of
 rbtree

On 22/09/08 11:01AM, Jan Kara wrote:
> On Thu 08-09-22 00:11:10, Ritesh Harjani (IBM) wrote:
> > On 22/09/06 05:29PM, Jan Kara wrote:
> > > Using rbtree for sorting groups by average fragment size is relatively
> > > expensive (needs rbtree update on every block freeing or allocation) and
> > > leads to wide spreading of allocations because selection of block group
> > > is very sensitive both to changes in free space and amount of blocks
> > > allocated. Furthermore, selecting the group with the best matching average
> > > fragment size is not necessary anyway, even more so because the
> > > variability of fragment sizes within a group is likely large so average
> > > is not telling much. We just need a group with large enough average
> > > fragment size so that we have high probability of finding large enough
> > > free extent and we don't want average fragment size to be too big so
> > > that we are likely to find free extent only somewhat larger than what we
> > > need.
> > > 
> > > So instead of maintaining an rbtree of groups sorted by fragment size, keep
> > > bins (lists) of groups where the average fragment size is in the interval
> > > [2^i, 2^(i+1)). This structure requires less updates on block allocation
> > > / freeing, generally avoids chaotic spreading of allocations into block
> > > groups, and still is able to quickly (even faster than the rbtree)
> > > provide a block group which is likely to have a suitably sized free
> > > space extent.
> > 
> > This makes sense because we anyway maintain buddy bitmaps for MB_NUM_ORDERS
> > orders. Hence the data structure maintaining the different lists of groups,
> > keyed by their average fragment size, can be bounded to MB_NUM_ORDERS lists.
> > This also gives amortized O(1) search time for finding the right group in
> > the CR1 scan.
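
(To make the binning concrete: a rough standalone sketch, not the kernel's
mb_avg_fragment_size_order(), of how an average fragment size in blocks could
map to one of the MB_NUM_ORDERS lists; the constant below is just an assumed
value for illustration.)

/*
 * Toy userspace illustration of binning groups by average fragment size
 * into lists for the [2^i, 2^(i+1)) intervals described above. Not the
 * kernel implementation; MB_NUM_ORDERS is hard-coded for the example.
 */
#include <stdio.h>

#define MB_NUM_ORDERS 14	/* assumed value, for illustration only */

static int avg_fragment_size_order(unsigned int avg_frag_blocks)
{
	int order = 0;

	if (avg_frag_blocks == 0)
		return -1;	/* empty group: kept off all lists */

	/* index of the highest set bit, i.e. i such that avg is in [2^i, 2^(i+1)) */
	while (avg_frag_blocks > 1) {
		avg_frag_blocks >>= 1;
		order++;
	}
	/* clamp so very large average fragments still land on the last list */
	if (order >= MB_NUM_ORDERS)
		order = MB_NUM_ORDERS - 1;
	return order;
}

int main(void)
{
	unsigned int samples[] = { 1, 3, 64, 100, 4096 };

	for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("avg %u blocks -> list %d\n",
		       samples[i], avg_fragment_size_order(samples[i]));
	return 0;
}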
> > 
> > > 
> > > This patch reduces the number of block groups used when untarring an
> > > archive with medium-sized files (size somewhat above 64k, which is the
> > > default mballoc limit for avoiding locality group preallocation) to about
> > > half and thus improves write speeds on eMMC flash significantly.
> > > 
> > 
> > Indeed a nice change. More in line with how we maintain the
> > sbi->s_mb_largest_free_orders lists.
> 
> I didn't really find more comments than the one below?

No, I meant that the new data structure is more in line with the
sbi->s_mb_largest_free_orders lists :) I had no other comments.

> 
> > I think, as you already noted, there are a few minor checkpatch errors;
> > other than that, one small query below.
> 
> Yep, some checkpatch errors + procfs file handling bugs + one bad unlock in
> an error recovery path. All fixed up locally :)

Sure.

> 
> > > -/*
> > > - * Reinsert grpinfo into the avg_fragment_size tree with new average
> > > - * fragment size.
> > > - */
> > > +/* Move group to appropriate avg_fragment_size list */
> > >  static void
> > >  mb_update_avg_fragment_size(struct super_block *sb, struct ext4_group_info *grp)
> > >  {
> > >  	struct ext4_sb_info *sbi = EXT4_SB(sb);
> > > +	int new_order;
> > >  
> > >  	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || grp->bb_free == 0)
> > >  		return;
> > >  
> > > -	write_lock(&sbi->s_mb_rb_lock);
> > > -	if (!RB_EMPTY_NODE(&grp->bb_avg_fragment_size_rb)) {
> > > -		rb_erase(&grp->bb_avg_fragment_size_rb,
> > > -				&sbi->s_mb_avg_fragment_size_root);
> > > -		RB_CLEAR_NODE(&grp->bb_avg_fragment_size_rb);
> > > -	}
> > > +	new_order = mb_avg_fragment_size_order(sb,
> > > +					grp->bb_free / grp->bb_fragments);
> > 
> > The previous rbtree code was always checking grp->bb_fragments for 0.
> > Can grp->bb_fragments be 0 here?
> 
> Since grp->bb_free is greater than zero, there should be at least one
> fragment...

aah yes, right.
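
(A tiny userspace toy of that invariant, with field names borrowed from the
hunk above but otherwise illustrative: any group with bb_free > 0 has at
least one free extent, so the early bb_free == 0 return is enough to keep
the division safe.)

/*
 * Illustration only, not kernel code: free blocks always belong to some
 * free extent, so bb_free > 0 implies bb_fragments >= 1 and the division
 * below cannot divide by zero once the empty-group case returns early.
 */
#include <assert.h>
#include <stdio.h>

struct toy_group_info {
	unsigned int bb_free;		/* total free blocks in the group */
	unsigned int bb_fragments;	/* number of free extents */
};

static int toy_avg_fragment_size(const struct toy_group_info *grp)
{
	if (grp->bb_free == 0)
		return 0;			/* mirrors the early return */

	assert(grp->bb_fragments >= 1);		/* holds whenever bb_free > 0 */
	return (int)(grp->bb_free / grp->bb_fragments);
}

int main(void)
{
	struct toy_group_info g = { .bb_free = 300, .bb_fragments = 5 };

	printf("average fragment size: %d blocks\n", toy_avg_fragment_size(&g));
	return 0;
}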

-ritesh
