Message-ID: <CAPFOzZurP23oCENeP57f7Kj-4uCf9bN9ERZQTbdZJh_d5rUEwg@mail.gmail.com>
Date: Mon, 31 Jul 2023 20:52:06 +0800
From: Fengnan Chang <changfengnan@...edance.com>
To: Guoqing Jiang <guoqing.jiang@...ux.dev>
Cc: adilger.kernel@...ger.ca, tytso@....edu,
linux-ext4@...r.kernel.org,
kernel test robot <oliver.sang@...el.com>
Subject: Re: [External] Re: [PATCH v3] ext4: improve trim efficiency
Hi Ted, Andreas:
Any comments?
Thanks.
On Mon, Jul 31, 2023 at 10:27 Guoqing Jiang <guoqing.jiang@...ux.dev> wrote:
>
>
>
> On 7/25/23 20:18, Fengnan Chang wrote:
> > In commit a015434480dc ("ext4: send parallel discards on commit
> > completions"), all discard commands are issued in parallel, so the
> > bios can be merged into one request and the low-level driver can
> > issue multiple segments at a time, which is more efficient. But
> > commit 55cdd0af2bc5 ("ext4: get discard out of jbd2 commit kthread
> > contex") seems to have broken this; let's fix it.
> > In my test:
> > 1. create 10 normal files, each 10G in size.
> > 2. deallocate the files, punching a 16k hole every 32k.
> > 3. trim the whole fs.
> >
> > The time of fstrim drops from 6.7s to 1.3s.
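> >
> > The hole punching and trim can be reproduced with something like the
> > sketch below (the /mnt mount point, the file paths and the assumption
> > that the 10G files already exist are illustrative, not my exact
> > script):
> >
> > #define _GNU_SOURCE
> > #include <fcntl.h>
> > #include <limits.h>
> > #include <linux/fs.h>	/* FITRIM, struct fstrim_range */
> > #include <stdio.h>
> > #include <sys/ioctl.h>
> > #include <unistd.h>
> >
> > int main(void)
> > {
> > 	int i, fd;
> >
> > 	/* step 2: punch a 16k hole at every 32k boundary of each file */
> > 	for (i = 0; i < 10; i++) {
> > 		char path[32];
> > 		off_t off;
> >
> > 		snprintf(path, sizeof(path), "/mnt/f%d", i);
> > 		fd = open(path, O_WRONLY);
> > 		if (fd < 0)
> > 			return 1;
> > 		for (off = 0; off < (10LL << 30); off += 32 << 10)
> > 			fallocate(fd, FALLOC_FL_PUNCH_HOLE |
> > 				  FALLOC_FL_KEEP_SIZE, off, 16 << 10);
> > 		close(fd);
> > 	}
> >
> > 	/* step 3: trim the whole filesystem, as fstrim(8) does */
> > 	fd = open("/mnt", O_RDONLY);
> > 	if (fd < 0)
> > 		return 1;
> > 	struct fstrim_range range = { .start = 0, .len = ULLONG_MAX };
> > 	if (ioctl(fd, FITRIM, &range))
> > 		perror("FITRIM");
> > 	close(fd);
> > 	return 0;
> > }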
> >
> > Reported-by: kernel test robot <oliver.sang@...el.com>
> > Closes: https://lore.kernel.org/oe-lkp/202307171455.ee68ef8b-oliver.sang@intel.com
> > Signed-off-by: Fengnan Chang <changfengnan@...edance.com>
> > ---
> > fs/ext4/mballoc.c | 49 +++++++++++++++++++++++++++++++++++++++++------
> > 1 file changed, 43 insertions(+), 6 deletions(-)
> >
> > diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
> > index a2475b8c9fb5..b75ca1df0d30 100644
> > --- a/fs/ext4/mballoc.c
> > +++ b/fs/ext4/mballoc.c
> > @@ -6790,7 +6790,8 @@ int ext4_group_add_blocks(handle_t *handle, struct super_block *sb,
> > * be called under the group lock.
> > */
> > static int ext4_trim_extent(struct super_block *sb,
> > - int start, int count, struct ext4_buddy *e4b)
> > + int start, int count, bool noalloc, struct ext4_buddy *e4b,
> > + struct bio **biop, struct ext4_free_data **entryp)
> > __releases(bitlock)
> > __acquires(bitlock)
> > {
> > @@ -6812,9 +6813,16 @@ __acquires(bitlock)
> > */
> > mb_mark_used(e4b, &ex);
> > ext4_unlock_group(sb, group);
> > - ret = ext4_issue_discard(sb, group, start, count, NULL);
> > + ret = ext4_issue_discard(sb, group, start, count, biop);
> > + if (!ret && !noalloc) {
> > + struct ext4_free_data *entry = kmem_cache_alloc(ext4_free_data_cachep,
> > + GFP_NOFS|__GFP_NOFAIL);
> > + entry->efd_start_cluster = start;
> > + entry->efd_count = count;
> > + *entryp = entry;
> > + }
> > +
> > ext4_lock_group(sb, group);
> > - mb_free_blocks(NULL, e4b, start, ex.fe_len);
> > return ret;
> > }
> >
> > @@ -6824,26 +6832,40 @@ static int ext4_try_to_trim_range(struct super_block *sb,
> > __acquires(ext4_group_lock_ptr(sb, e4b->bd_group))
> > __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
> > {
> > - ext4_grpblk_t next, count, free_count;
> > + ext4_grpblk_t next, count, free_count, bak;
> > void *bitmap;
> > + struct ext4_free_data *entry = NULL, *fd, *nfd;
> > + struct list_head discard_data_list;
> > + struct bio *discard_bio = NULL;
> > + struct blk_plug plug;
> > + bool noalloc = false;
> > +
> > + INIT_LIST_HEAD(&discard_data_list);
> >
> > bitmap = e4b->bd_bitmap;
> > start = (e4b->bd_info->bb_first_free > start) ?
> > e4b->bd_info->bb_first_free : start;
> > count = 0;
> > free_count = 0;
> > + bak = start;
> >
> > + blk_start_plug(&plug);
> > while (start <= max) {
> > start = mb_find_next_zero_bit(bitmap, max + 1, start);
> > if (start > max)
> > break;
> > next = mb_find_next_bit(bitmap, max + 1, start);
> > + /* when there is only one segment, there is no need to alloc an entry */
> > + noalloc = (free_count == 0) && (next >= max);
> >
> > if ((next - start) >= minblocks) {
> > - int ret = ext4_trim_extent(sb, start, next - start, e4b);
> > + int ret = ext4_trim_extent(sb, start, next - start, noalloc, e4b,
> > + &discard_bio, &entry);
> >
> > - if (ret && ret != -EOPNOTSUPP)
> > + if (ret < 0)
> > break;
> > + if (entry)
> > + list_add_tail(&entry->efd_list, &discard_data_list);
> > count += next - start;
> > }
> > free_count += next - start;
> > @@ -6863,6 +6885,21 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
> > if ((e4b->bd_info->bb_free - free_count) < minblocks)
> > break;
> > }
> > + if (discard_bio) {
> > + ext4_unlock_group(sb, e4b->bd_group);
> > + submit_bio_wait(discard_bio);
> > + bio_put(discard_bio);
> > + ext4_lock_group(sb, e4b->bd_group);
> > + }
> > + blk_finish_plug(&plug);
> > +
> > + if (noalloc)
> > + mb_free_blocks(NULL, e4b, bak, free_count);
> > +
> > + list_for_each_entry_safe(fd, nfd, &discard_data_list, efd_list) {
> > + mb_free_blocks(NULL, e4b, fd->efd_start_cluster, fd->efd_count);
> > + kmem_cache_free(ext4_free_data_cachep, fd);
> > + }
> >
> > return count;
> > }
>
> With the new version, I don't see a big difference in my test.
>
> Thanks,
> Guoqing
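
For anyone skimming the thread: the mechanism this patch restores is the
same one blkdev_issue_discard() uses internally. Passing a struct bio **
through ext4_issue_discard() down to __blkdev_issue_discard() chains each
discard bio onto the previous one, and a single submit_bio_wait() on the
tail of the chain waits for all of them, all under one blk_plug so the
block layer can merge adjacent discards into fewer requests. The deferred
mb_free_blocks() calls keep the trimmed ranges marked in-use in the buddy
until the discards have completed. A minimal sketch of the chaining
pattern (struct discard_range here is a made-up container for
sector/count pairs; this is not the ext4 code itself):

	struct discard_range {
		sector_t sector;
		sector_t nr_sects;
	};

	static int issue_discards(struct block_device *bdev,
				  struct discard_range *range, int nr_ranges)
	{
		struct blk_plug plug;
		struct bio *bio = NULL;	/* tail of the discard bio chain */
		int i, ret = 0;

		blk_start_plug(&plug);
		for (i = 0; i < nr_ranges; i++) {
			/* chains a new bio onto *bio; earlier bios are
			 * submitted as the chain grows, under the plug */
			ret = __blkdev_issue_discard(bdev, range[i].sector,
						     range[i].nr_sects,
						     GFP_NOFS, &bio);
			if (ret)
				break;
		}
		if (bio) {
			int err = submit_bio_wait(bio); /* waits for the
							 * whole chain */
			bio_put(bio);
			if (!ret)
				ret = err;
		}
		blk_finish_plug(&plug);
		return ret;
	}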