Message-ID: <CAPFOzZtN1FkoSUm_hXFNO06hxzQ2QN76hWox-x41xwkStVoR=A@mail.gmail.com>
Date: Wed, 10 Jan 2024 09:55:56 +0800
From: Fengnan Chang <changfengnan@...edance.com>
To: Jan Kara <jack@...e.cz>
Cc: tytso@....edu, adilger.kernel@...ger.ca, linux-ext4@...r.kernel.org
Subject: Re: [External] Re: [PATCH v6] ext4: improve trim efficiency

On Tue, Jan 9, 2024 at 20:09 Jan Kara <jack@...e.cz> wrote:
>
> On Tue 09-01-24 19:28:07, Fengnan Chang wrote:
> > On Tue, Jan 9, 2024 at 01:15 Jan Kara <jack@...e.cz> wrote:
> > >
> > > On Fri 01-09-23 17:28:20, Fengnan Chang wrote:
> > > > In commit a015434480dc ("ext4: send parallel discards on commit
> > > > completions"), issuing all discard commands in parallel lets the
> > > > bios be merged into one request, so the low-level driver can issue
> > > > multiple segments at once, which is more efficient. However, commit
> > > > 55cdd0af2bc5 ("ext4: get discard out of jbd2 commit kthread contex")
> > > > seems to have broken this behaviour; let's fix it.
> > > >
> > > > In my test:
> > > > 1. Create 10 normal files, each 10G in size.
> > > > 2. Deallocate ranges in each file by punching a 16k hole every 32k.
> > > > 3. Trim the whole fs.
> > > > The time taken by fstrim drops from 6.7s to 1.3s.
> > > >
> > > > Signed-off-by: Fengnan Chang <changfengnan@...edance.com>
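
For context, the batching described in the changelog above boils down to the
shape below (an illustrative sketch only, not the actual hunk; the
ext4_issue_discard() arguments are simplified here):

	struct blk_plug plug;
	struct ext4_free_data *fd;

	blk_start_plug(&plug);
	/*
	 * While the plug is held, the discard bios queued below stay in the
	 * per-task plug list, so the block layer can merge adjacent ones and
	 * the device receives fewer, larger requests.
	 */
	list_for_each_entry(fd, &discard_data_list, efd_list)
		ext4_issue_discard(sb, group, fd->efd_start_cluster,
				   fd->efd_count);
	blk_finish_plug(&plug);
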
> > >
> > > This seems to have fallen through the cracks... I'm sorry for that.
> > >
> > > >  static int ext4_try_to_trim_range(struct super_block *sb,
> > > >               struct ext4_buddy *e4b, ext4_grpblk_t start,
> > > >               ext4_grpblk_t max, ext4_grpblk_t minblocks)
> > > >  __acquires(ext4_group_lock_ptr(sb, e4b->bd_group))
> > > >  __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
> > > >  {
> > > > -     ext4_grpblk_t next, count, free_count;
> > > > +     ext4_grpblk_t next, count, free_count, bak;
> > > >       void *bitmap;
> > > > +     struct ext4_free_data *entry = NULL, *fd, *nfd;
> > > > +     struct list_head discard_data_list;
> > > > +     struct bio *discard_bio = NULL;
> > > > +     struct blk_plug plug;
> > > > +     ext4_group_t group = e4b->bd_group;
> > > > +     struct ext4_free_extent ex;
> > > > +     bool noalloc = false;
> > > > +     int ret = 0;
> > > > +
> > > > +     INIT_LIST_HEAD(&discard_data_list);
> > > >
> > > >       bitmap = e4b->bd_bitmap;
> > > >       start = max(e4b->bd_info->bb_first_free, start);
> > > >       count = 0;
> > > >       free_count = 0;
> > > >
> > > > +     blk_start_plug(&plug);
> > > >       while (start <= max) {
> > > >               start = mb_find_next_zero_bit(bitmap, max + 1, start);
> > > >               if (start > max)
> > > >                       break;
> > > > +             bak = start;
> > > >               next = mb_find_next_bit(bitmap, max + 1, start);
> > > > -
> > > >               if ((next - start) >= minblocks) {
> > > > -                     int ret = ext4_trim_extent(sb, start, next - start, e4b);
> > > > +                     /* when only one segment, there is no need to alloc entry */
> > > > +                     noalloc = (free_count == 0) && (next >= max);
> > >
> > > Is the single extent case really worth the complications to save one
> > > allocation? I don't think it is but maybe I'm missing something. Otherwise
> > > the patch looks good to me!
> > Yeah, it's necessary: if there is only one segment, allocating memory may
> > cause a performance regression.
> > Refer to https://lore.kernel.org/linux-ext4/CALWNXx-6y0=ZDBMicv2qng9pKHWcpJbCvUm9TaRBwg81WzWkWQ@mail.gmail.com/
>
> Ah, thanks for the reference! Then what I'd suggest is something like:
>
>         struct ext4_free_data first_entry;
>         /*
>          * We preallocate the first entry on stack to optimize for the common
>          * case of trimming single extent in each group. It has measurable
>          * performance impact.
>          */
>         struct ext4_free_data *entry = &first_entry;
>
> then when we allocate we do:
>
>                 if (!entry)
>                         entry = kmem_cache_alloc(...)
>                 entry->efd_start_cluster = start;
>                 entry->efd_count = next - start;
>                 list_add_tail(&entry->efd_list, &discard_data_list);
>                 entry = NULL;
>
> and then when freeing we can have:
>
>         list_for_each_entry_safe(fd, nfd, &discard_data_list, efd_list) {
>                 mb_free_blocks(NULL, e4b, fd->efd_start_cluster, fd->efd_count);
>                 if (fd != &first_entry)
>                         kmem_cache_free(ext4_free_data_cachep, fd);
>         }
>
> Then it is more understandable what's going on...
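
If I'm reading it right, the three pieces fit together roughly like this
(untested sketch; allocation flags and error handling are only placeholders):

	struct ext4_free_data first_entry;	/* on stack for the common single-extent case */
	struct ext4_free_data *entry = &first_entry;
	struct ext4_free_data *fd, *nfd;
	LIST_HEAD(discard_data_list);

	/* in the scan loop, once a trimmable extent [start, next) is found */
	if (!entry) {
		entry = kmem_cache_alloc(ext4_free_data_cachep, GFP_NOFS);
		if (!entry)
			break;		/* placeholder error handling */
	}
	entry->efd_start_cluster = start;
	entry->efd_count = next - start;
	list_add_tail(&entry->efd_list, &discard_data_list);
	entry = NULL;

	/* after the discards have been issued */
	list_for_each_entry_safe(fd, nfd, &discard_data_list, efd_list) {
		mb_free_blocks(NULL, e4b, fd->efd_start_cluster, fd->efd_count);
		if (fd != &first_entry)
			kmem_cache_free(ext4_free_data_cachep, fd);
	}
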
Looks better; I'll modify it in the next version.
Thanks.
>
>                                                                 Honza
> --
> Jan Kara <jack@...e.com>
> SUSE Labs, CR
