Message-ID: <f80c28ea-5c46-4caa-b2b9-6e76f45a7cfb@gmx.com>
Date: Tue, 24 Jun 2025 12:37:41 +0930
From: Qu Wenruo <quwenruo.btrfs@....com>
To: Byungchul Park <byungchul@...com>, Qu Wenruo <quwenruo.btrfs@....com>
Cc: linux-kernel@...r.kernel.org, clm@...com, josef@...icpanda.com,
dsterba@...e.com, linux-btrfs@...r.kernel.org, kernel_team@...ynix.com,
torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
yeoreum.yun@....com, yunseong.kim@...csson.com, gwan-gyeong.mun@...el.com,
harry.yoo@...cle.com, ysk@...lloc.com
Subject: Re: [RFC] DEPT report on around btrfs, unlink, and truncate
On 2025/6/24 11:14, Byungchul Park wrote:
[...]
>> I believe it's from btrfs_clear_buffer_dirty():
>>
>> We have a for() loop iterating over all the folios of an extent buffer
>> (aka, a metadata structure), then clearing their dirty flags.
>>
>> The same applies to btrfs_mark_buffer_dirty() -> set_extent_buffer_dirty().
>
> Thanks to Yunseong, I figured out this is the case.
>
>> In that case, the folio is 100% belonging to btree inode thus metadata.
>
> Good to know.
>
> Lastly, is it still fine to directly manipulate block devices, or to
> stack file systems using loopback devices, with respect to conflicts
> between folios and extent_buffers?
I'm not an expert on the block device page cache, but since the
data/metadata split is handled entirely inside btrfs itself, even with
stacked loopback devices the metadata/data IO from btrfs just becomes
data IO of the next layer.

Since we do not deadlock inside btrfs, it shouldn't cause a deadlock on
the lower layer either, as all the IO becomes data IO there, even if the
lower layer is another btrfs.
Thanks,
Qu
>
> If you confirm it, this issue can be closed :-) Thanks in advance.
>
> Byungchul
>
>> Thus the folio lock cannot conflict with a data folio's lock, and
>> there should be no deadlock.
>>
>> Thanks,
>> Qu
>>