Message-ID: <5049c794-9a92-462c-a455-2bdf94cdebef@huaweicloud.com>
Date: Mon, 9 Dec 2024 16:32:41 +0800
From: Zhang Yi <yi.zhang@...weicloud.com>
To: Jan Kara <jack@...e.cz>
Cc: linux-ext4@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, tytso@....edu, adilger.kernel@...ger.ca,
ritesh.list@...il.com, hch@...radead.org, djwong@...nel.org,
david@...morbit.com, zokeefe@...gle.com, yi.zhang@...wei.com,
chengzhihao1@...wei.com, yukuai3@...wei.com, yangerkun@...wei.com
Subject: Re: [PATCH 12/27] ext4: introduce seq counter for the extent status
entry
On 2024/12/7 0:21, Jan Kara wrote:
> On Fri 06-12-24 16:55:01, Zhang Yi wrote:
>> On 2024/12/4 20:42, Jan Kara wrote:
>>> On Tue 22-10-24 19:10:43, Zhang Yi wrote:
>>>> From: Zhang Yi <yi.zhang@...wei.com>
>>>>
>>>> In the iomap_write_iter(), the iomap buffered write framework does not hold
>>>> any locks between querying the inode extent mapping info and performing
>>>> page cache writes. As a result, the extent mapping can be changed due to
>>>> concurrent I/O in flight. Similarly, in the iomap_writepage_map(), the
>>>> write-back process faces a similar problem: concurrent changes can
>>>> invalidate the extent mapping before the I/O is submitted.
>>>>
>>>> Therefore, both of these processes must recheck the mapping info after
>>>> acquiring the folio lock. To address this, similar to XFS, we propose
>>>> introducing an extent sequence number to serve as a validity cookie for
>>>> the extent. We will increment this number whenever the extent status
>>>> tree changes, thereby preparing for the buffered write iomap conversion.
>>>> Besides, it also changes the trace code style to make checkpatch.pl
>>>> happy.
>>>>
>>>> Signed-off-by: Zhang Yi <yi.zhang@...wei.com>
>>>
>>> Overall using some sequence counter makes sense.
>>>
>>>> diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
>>>> index c786691dabd3..bea4f87db502 100644
>>>> --- a/fs/ext4/extents_status.c
>>>> +++ b/fs/ext4/extents_status.c
>>>> @@ -204,6 +204,13 @@ static inline ext4_lblk_t ext4_es_end(struct extent_status *es)
>>>> return es->es_lblk + es->es_len - 1;
>>>> }
>>>>
>>>> +static inline void ext4_es_inc_seq(struct inode *inode)
>>>> +{
>>>> + struct ext4_inode_info *ei = EXT4_I(inode);
>>>> +
>>>> + WRITE_ONCE(ei->i_es_seq, READ_ONCE(ei->i_es_seq) + 1);
>>>> +}
>>>
>>> This looks potentially dangerous because we can lose i_es_seq updates this
>>> way. Like
>>>
>>> CPU1 CPU2
>>> x = READ_ONCE(ei->i_es_seq)
>>> x = READ_ONCE(ei->i_es_seq)
>>> WRITE_ONCE(ei->i_es_seq, x + 1)
>>> ...
>>> potentially many times
>>> WRITE_ONCE(ei->i_es_seq, x + 1)
>>> -> the counter goes back leading to possibly false equality checks
>>>
>>
>> In my current implementation, I don't think this race condition can
>> happen since all ext4_es_inc_seq() invocations are under
>> EXT4_I(inode)->i_es_lock. So I think it works fine now, or was I
>> missed something?
>
> Hum, as far as I've checked, at least the place in ext4_es_insert_extent()
> where you call ext4_es_inc_seq() doesn't hold i_es_lock (yet). If you meant
> to protect the updates by i_es_lock, then move the call sites and please
> add a comment about it. Also it should be enough to do:
>
> WRITE_ONCE(ei->i_es_seq, ei->i_es_seq + 1);
>
> since we cannot be really racing with other writers.
Oh, sorry, I mentioned the wrong lock. What I intended to say is
i_data_sem.

Currently, every place that updates the extent status tree holds
i_data_sem in write mode, which prevents any race in those scenarios.
However, we may hold i_data_sem only in read mode while loading a new
entry from the extent tree (e.g., in ext4_map_query_blocks()). In that
case a race on the counter could occur, but it does not modify the
extents, and the newly loaded range should be unrelated to the mapping
range we obtained (if it overlapped the range we have, the old extent
status entry must have been removed first under i_data_sem held in
write mode, which guarantees that i_es_seq increases by at least one).
Therefore we should never end up using a stale mapping, and no real
issue can be triggered.

That said, after thinking about it again, I agree with you that this
approach is subtle, fragile, and hard to understand, so I now think we
should move the increment under i_es_lock.
>
>>> I think you'll need to use atomic_t and appropriate functions here.
>>>
>>>> @@ -872,6 +879,7 @@ void ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk,
>>>> BUG_ON(end < lblk);
>>>> WARN_ON_ONCE(status & EXTENT_STATUS_DELAYED);
>>>>
>>>> + ext4_es_inc_seq(inode);
>>>
>>> I'm somewhat wondering: Are extent status tree modifications the right
>>> place to advance the sequence counter? The counter needs to advance
>>> whenever the mapping information changes. This means that we'd be
>>> needlessly advancing the counter (and thus possibly forcing retries) when
>>> we are just adding new information from ordinary extent tree into cache.
>>> Also someone can be doing extent tree manipulations without touching extent
>>> status tree (if the information was already pruned from there).
>>
>> Sorry, I don't quite understand here. IIUC, we can't modify the extent
>> tree without also touching extent status tree; otherwise, the extent
>> status tree will become stale, potentially leading to undesirable and
>> unexpected outcomes later on, as the extent lookup paths rely on and
>> always trust the status tree. If this situation happens, would it be
>> considered a bug? Additionally, I have checked the code but didn't find
>> any concrete cases where this could happen. Have I overlooked something?
>
> What I'm worried about is that this seems a bit fragile because e.g. in
> ext4_collapse_range() we do:
>
> ext4_es_remove_extent(inode, start, EXT_MAX_BLOCKS - start)
> <now go and manipulate the extent tree>
>
> So if somebody managed to sneak in between ext4_es_remove_extent() and
> the extent tree manipulation, he could get a block mapping which is shortly
> after invalidated by the extent tree changes. And as I'm checking now,
> writeback code *can* sneak in there because during extent tree
> manipulations we call ext4_datasem_ensure_credits() which can drop
> i_data_sem to restart a transaction.
>
> Now we do writeout & invalidate page cache before we start to do these
> extent tree dances so I don't see how this could lead to *actual* use
> after free issues but it makes me somewhat nervous. So that's why I'd like
> to have some clear rules from which it is obvious that the counter makes
> sure we do not use stale mappings.
Yes, I see. I think the rules should be as follows:

First, when the iomap infrastructure creates or queries file mapping
information, we must ensure that the mapping information always passes
through the extent status tree; that is, ext4_map_blocks(),
ext4_map_query_blocks(), and ext4_map_create_blocks() should cache the
extent status entries that we intend to use.

Second, when updating the extent tree, we must hold i_data_sem in
write mode and update the extent status tree atomically with it. If we
cannot update the extent tree under a single hold of i_data_sem, we
should first remove all related extent status entries in the affected
range and only then manipulate the extent tree, ensuring that any
extent status entries that do exist are always up to date (as
ext4_collapse_range() does).

Finally, if we want to manipulate the extent tree without caching the
result, we should also remove the extent status entries first.

In summary: keep the extent status tree and the extent tree consistent
under a single hold of i_data_sem; if we can't, remove the extent
status entries before manipulating the extent tree.
Do you agree?
>
>>> So I think
>>> needs some very good documentation what are the expectations from the
>>> sequence counter and explanations why they are satisfied so that we don't
>>> break this in the future.
>>
>> Yeah, it's a good suggestion, where do you suggest putting this
>> documentation, how about in the front of extents_status.c?
>
> I think at the function incrementing the counter would be fine.
>
Sure, thanks for pointing this out.
Thanks,
Yi.