Message-ID: <7559a11a-f234-418c-880b-34a0ecd1c9a9@kernel.org>
Date: Mon, 9 Sep 2024 16:18:29 +0800
From: Chao Yu <chao@...nel.org>
To: Daeho Jeong <daeho43@...il.com>
Cc: chao@...nel.org, linux-kernel@...r.kernel.org,
 linux-f2fs-devel@...ts.sourceforge.net, kernel-team@...roid.com,
 Daeho Jeong <daehojeong@...gle.com>
Subject: Re: [f2fs-dev] [PATCH 2/7] f2fs: read summary blocks with the correct
 amount for migration_granularity

On 2024/9/7 4:23, Daeho Jeong wrote:
> On Thu, Sep 5, 2024 at 7:56 PM Chao Yu <chao@...nel.org> wrote:
>>
>> On 2024/8/30 5:52, Daeho Jeong wrote:
>>> From: Daeho Jeong <daehojeong@...gle.com>
>>>
>>> Currently we do readahead for a full section without considering
>>> migration_granularity, which triggers unnecessary reads. So, make it
>>> read with the correct amount.
>>>
>>> Signed-off-by: Daeho Jeong <daehojeong@...gle.com>
>>> ---
>>>    fs/f2fs/gc.c | 33 ++++++++++++++++++++-------------
>>>    1 file changed, 20 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
>>> index 46e3bc26b78a..b5d3fd40b17a 100644
>>> --- a/fs/f2fs/gc.c
>>> +++ b/fs/f2fs/gc.c
>>> @@ -1708,24 +1708,33 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
>>>        struct blk_plug plug;
>>>        unsigned int segno = start_segno;
>>>        unsigned int end_segno = start_segno + SEGS_PER_SEC(sbi);
>>> +     unsigned int sec_end_segno;
>>>        int seg_freed = 0, migrated = 0;
>>>        unsigned char type = IS_DATASEG(get_seg_entry(sbi, segno)->type) ?
>>>                                                SUM_TYPE_DATA : SUM_TYPE_NODE;
>>>        unsigned char data_type = (type == SUM_TYPE_DATA) ? DATA : NODE;
>>>        int submitted = 0;
>>>
>>> -     if (__is_large_section(sbi))
>>> -             end_segno = rounddown(end_segno, SEGS_PER_SEC(sbi));
>>> +     if (__is_large_section(sbi)) {
>>> +             sec_end_segno = rounddown(end_segno, SEGS_PER_SEC(sbi));
>>>
>>> -     /*
>>> -      * zone-capacity can be less than zone-size in zoned devices,
>>> -      * resulting in less than expected usable segments in the zone,
>>> -      * calculate the end segno in the zone which can be garbage collected
>>> -      */
>>> -     if (f2fs_sb_has_blkzoned(sbi))
>>> -             end_segno -= SEGS_PER_SEC(sbi) -
>>> +             /*
>>> +              * zone-capacity can be less than zone-size in zoned devices,
>>> +              * resulting in less than expected usable segments in the zone,
>>> +              * calculate the end segno in the zone which can be garbage
>>> +              * collected
>>> +              */
>>> +             if (f2fs_sb_has_blkzoned(sbi))
>>> +                     sec_end_segno -= SEGS_PER_SEC(sbi) -
>>>                                        f2fs_usable_segs_in_sec(sbi, segno);
>>>
>>> +             if (gc_type == BG_GC)
>>> +                     end_segno = start_segno + sbi->migration_granularity;
>>> +
>>> +             if (end_segno > sec_end_segno)
>>> +                     end_segno = sec_end_segno;
>>> +     }
>>> +
>>>        sanity_check_seg_type(sbi, get_seg_entry(sbi, segno)->type);
>>>
>>>        /* readahead multi ssa blocks those have contiguous address */
>>> @@ -1762,9 +1771,6 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
>>>
>>>                if (get_valid_blocks(sbi, segno, false) == 0)
>>>                        goto freed;
>>> -             if (gc_type == BG_GC && __is_large_section(sbi) &&
>>> -                             migrated >= sbi->migration_granularity)
>>
>> It seems we change the logic from migrating "migration_granularity" segments which
>> have valid blocks to scanning "migration_granularity" segments and trying to
>> migrate the valid blocks in those segments.
>>
>> IIUC, when background GC recycles a sparse zone, it will take the gc thread more
>> rounds, which seems inefficient. What do you think about keeping the previous
>> implementation?
> 
> I got your point. However, with zoned devices having 1GB sections, we would
> touch almost 2MB worth of SSA block pages every round, even though we don't
> need to. Maybe we can introduce

Yes, or can we:
a) just read the SSA blocks for segments which have valid blocks (rough sketch below);
b) limit the readahead size to a threshold as you proposed.
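
For a), something like this (untested, just to illustrate the idea; with 2MB
segments a 1GB section is 512 segments, i.e. 512 * 4KB = 2MB of SSA pages per
round as you noted, so skipping segments with no valid blocks should already
help a lot):

	/* only readahead SSA blocks of segments that still have valid blocks */
	if (__is_large_section(sbi)) {
		unsigned int i;

		for (i = segno; i < end_segno; i++) {
			if (!get_valid_blocks(sbi, i, false))
				continue;
			f2fs_ra_meta_pages(sbi, GET_SUM_BLOCK(sbi, i),
						1, META_SSA, true);
		}
	}

Though this gives up batching contiguous SSA blocks, so b) may end up being
the simpler option.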

Thanks,

> another sysfs node like migration_window_limit, which can be set to double
> migration_granularity by default, limiting the size of scanning.
> 
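
If we go with such a knob, the bound in do_garbage_collect() could look
roughly like below (untested; migration_window_limit is only a hypothetical
sbi field here, defaulting to 2 * migration_granularity as you suggested):

	/* hypothetical: cap the BG_GC scan window with a separate sysfs knob */
	if (gc_type == BG_GC)
		end_segno = start_segno + sbi->migration_window_limit;

	if (end_segno > sec_end_segno)
		end_segno = sec_end_segno;
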
>>
>> Thanks,
>>
>>> -                     goto skip;
>>>                if (!PageUptodate(sum_page) || unlikely(f2fs_cp_error(sbi)))
>>>                        goto skip;
>>>
>>> @@ -1803,7 +1809,8 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
>>>
>>>                if (__is_large_section(sbi))
>>>                        sbi->next_victim_seg[gc_type] =
>>> -                             (segno + 1 < end_segno) ? segno + 1 : NULL_SEGNO;
>>> +                             (segno + 1 < sec_end_segno) ?
>>> +                                     segno + 1 : NULL_SEGNO;
>>>    skip:
>>>                f2fs_put_page(sum_page, 0);
>>>        }
>>

