Message-ID: <2e15214b-7e95-4e64-a899-725de12c9037@gmail.com>
Date: Mon, 2 Sep 2024 13:10:47 +0200
From: Luca Stefani <luca.stefani.ge1@...il.com>
To: Qu Wenruo <quwenruo.btrfs@....com>
Cc: Chris Mason <clm@...com>, Josef Bacik <josef@...icpanda.com>,
David Sterba <dsterba@...e.com>, linux-btrfs@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] btrfs: Don't block system suspend during fstrim
On 02/09/24 13:01, Qu Wenruo wrote:
>
>
> On 2024/9/2 18:47, Qu Wenruo wrote:
> [...]
>> Forgot to mention that, even for error case, we should copy the
>> fstrim_range structure to the ioctl parameter to indicate any progress
>> we made.
>
> Sorry to bother you again, I should have noticed it earlier.
>
> There is another possible cause of the huge delay during freezing:
> the blkdev_issue_discard() calls inside btrfs_issue_discard() itself.
>
> The problem here is that we can have a very large, mostly empty disk,
> e.g. an 8TiB device.
>
> In that case, although we split the discard according to our super block
> locations, the last super block ends at 256GiB, so we can still submit a
> huge discard for the range [256GiB, 8TiB), causing a very large delay.
>
> So the proper way here is to limit the size of each discard (e.g. to
> 1GiB, matching the chunk stripe size limit), and do the freezing check
> after each 1GiB discard.
>
> So this may be a larger problem than we thought.
>
> I would recommend splitting the fix into the following parts:
>
> - Simple small fixes
> Like always updating the fstrim_range structure, regardless of the
> return value.
Sure, that's already done. Will upload separately.
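
For reference, the small fix is essentially along these lines in
btrfs_ioctl_fitrim() (an untested sketch, not the exact patch I'll send):

	ret = btrfs_trim_fs(fs_info, &range);

	/*
	 * Copy the range back to user space even on error, so the caller
	 * still sees how many bytes were trimmed before the failure or
	 * interruption.
	 */
	if (copy_to_user(arg, &range, sizeof(range)))
		return -EFAULT;

	return ret;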
>
> - Proper discard size split and new freezing checks
I'll try to do the first part, and fall back to the mailing list for help
in case of failure, thanks.
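
Roughly what I have in mind for the discard split (just an untested sketch
to confirm the direction; the helper name and the 1GiB constant are
illustrative, not final):

/* Illustrative sketch, assumes linux/sizes.h, linux/freezer.h and
 * linux/sched/signal.h are already included in extent-tree.c. */
#define BTRFS_MAX_DISCARD_CHUNK	SZ_1G	/* cap per discard, illustrative */

static int btrfs_issue_discard_chunked(struct block_device *bdev, u64 start,
				       u64 len, u64 *discarded_bytes)
{
	int ret = 0;

	*discarded_bytes = 0;

	while (len) {
		u64 chunk = min_t(u64, len, BTRFS_MAX_DISCARD_CHUNK);

		ret = blkdev_issue_discard(bdev, start >> SECTOR_SHIFT,
					   chunk >> SECTOR_SHIFT, GFP_KERNEL);
		if (ret)
			break;

		*discarded_bytes += chunk;
		start += chunk;
		len -= chunk;

		/* Re-check between chunks so a pending suspend or fatal
		 * signal is not blocked by one huge discard. */
		if (freezing(current) || fatal_signal_pending(current)) {
			ret = -ERESTARTSYS;
			break;
		}
	}

	return ret;
}

Does that match what you had in mind, or would you rather keep the split
inside the existing btrfs_issue_discard() loop?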
>
> Thanks,
> Qu
>>
>> Thanks,
>> Qu
>>>
>>> Just please update the commit message to explicitly mention that we
>>> have a free extent discarding phase, which can trim a lot of unallocated
>>> space, and that there is no limit on the trim size (unlike the block
>>> group part).
>>>
>>> Thanks,
>>> Qu
>>>>
>>>> Thanks,
>>>> Qu
>>>>
>>>> >> }
>>>> >> mutex_unlock(&fs_devices->device_list_mutex);
>>>> >
>>>>
>>>