Message-ID: <5f9c75f0-d0ae-a9ff-df1b-40dd164d74ca@redhat.com>
Date: Mon, 4 Jul 2022 10:40:30 +0800
From: Xiubo Li <xiubli@...hat.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Jeff Layton <jlayton@...nel.org>, idryomov@...il.com,
dhowells@...hat.com, vshankar@...hat.com,
linux-kernel@...r.kernel.org, ceph-devel@...r.kernel.org,
keescook@...omium.org, linux-fsdevel@...r.kernel.org,
linux-cachefs@...hat.com
Subject: Re: [PATCH 1/2] netfs: release the folio lock and put the folio
before retrying
On 7/4/22 10:10 AM, Matthew Wilcox wrote:
> On Mon, Jul 04, 2022 at 09:13:44AM +0800, Xiubo Li wrote:
>> On 7/1/22 6:38 PM, Jeff Layton wrote:
>>> I don't know here... I think it might be better to just expect that,
>>> when this function returns an error, the folio has already been unlocked.
>>> Doing it this way will mean that you will lock and unlock the folio a
>>> second time for no reason.
>>>
>>> Maybe something like this instead?
>>>
>>> diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
>>> index 42f892c5712e..8ae7b0f4c909 100644
>>> --- a/fs/netfs/buffered_read.c
>>> +++ b/fs/netfs/buffered_read.c
>>> @@ -353,7 +353,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
>>>  			trace_netfs_failure(NULL, NULL, ret, netfs_fail_check_write_begin);
>>>  			if (ret == -EAGAIN)
>>>  				goto retry;
>>> -			goto error;
>>> +			goto error_unlocked;
>>>  		}
>>>  	}
>>> @@ -418,6 +418,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
>>>  error:
>>>  	folio_unlock(folio);
>>>  	folio_put(folio);
>>> +error_unlocked:
>>>  	_leave(" = %d", ret);
>>>  	return ret;
>>>  }
>> Then the "afs" won't work correctly:
>>
>> static int afs_check_write_begin(struct file *file, loff_t pos, unsigned len,
>> 				 struct folio *folio, void **_fsdata)
>> {
>> 	struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
>>
>> 	return test_bit(AFS_VNODE_DELETED, &vnode->flags) ? -ESTALE : 0;
>> }
>>
>> The "afs" does nothing with the folio lock.
> It's OK to fix AFS too.
>
Okay, will fix it. Thanks!
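
For reference, a minimal sketch of what such an afs fix might look like under
the proposal above (an assumption for illustration, not the actual patch):
with netfs_write_begin() jumping to error_unlocked on failure, the
->check_write_begin() callback would have to unlock and put the folio itself
before returning an error.

/*
 * Sketch only: assumes ->check_write_begin() becomes responsible for
 * dropping the folio lock and reference before returning an error,
 * per the error_unlocked change above.
 */
static int afs_check_write_begin(struct file *file, loff_t pos, unsigned len,
				 struct folio *folio, void **_fsdata)
{
	struct afs_vnode *vnode = AFS_FS_I(file_inode(file));

	if (test_bit(AFS_VNODE_DELETED, &vnode->flags)) {
		/* Deleted vnode: drop the folio here, then fail. */
		folio_unlock(folio);
		folio_put(folio);
		return -ESTALE;
	}
	return 0;
}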
-- Xiubo