Message-ID: <4c9a9dc9-1ea3-8f34-44e8-617680652ca3@kernel.org>
Date: Thu, 23 Nov 2017 23:29:46 +0800
From: Chao Yu <chao@...nel.org>
To: Jan Kara <jack@...e.cz>, Chao Yu <yuchao0@...wei.com>
Cc: jack@...e.com, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] quota: propagate error from __dquot_initialize
On 2017/11/21 21:18, Jan Kara wrote:
> On Fri 17-11-17 10:07:53, Chao Yu wrote:
>> Commit 6184fc0b8dd7 ("quota: Propagate error from ->acquire_dquot()")
>> propagated the error from __dquot_initialize to its caller, but we forgot
>> to handle that error in add_dquot_ref(). So, currently, during the quota
>> accounting initialization flow, if __dquot_initialize() fails for some
>> inodes, we silently ignore the error and account only the remaining
>> inodes, which is not a good implementation.
>>
>> In this patch, we make the user aware of such an error, so that after
>> quota has been turned on successfully, we can be sure the disk usage of
>> all inodes is accounted, which is more reasonable.
>
> Thanks for the patch! One comment below:
>
>> @@ -2371,10 +2377,18 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
>> dqopt->flags |= dquot_state_flag(flags, type);
>> spin_unlock(&dq_state_lock);
>>
>> - add_dquot_ref(sb, type);
>> + error = add_dquot_ref(sb, type);
>> + if (error)
>> + goto out_dquot_flags;
>>
>> return 0;
>> -
>> +out_dquot_flags:
>> + spin_lock(&dq_data_lock);
>> + dqopt->info[type].dqi_flags &= ~DQF_SYS_FILE;
>> + spin_unlock(&dq_data_lock);
>> + spin_lock(&dq_state_lock);
>> + dqopt->flags &= ~(dquot_state_flag(flags, type));
>> + spin_unlock(&dq_state_lock);
>> out_file_init:
>> dqopt->files[type] = NULL;
>> iput(inode);
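
For context, the hunk above shows only the caller side; the patch also
changes add_dquot_ref() itself to return the first __dquot_initialize()
failure instead of ignoring it. A minimal sketch of that idea (the real
function in fs/quota/dquot.c walks sb->s_inodes under s_inode_list_lock
and takes/drops inode references; those details are elided here):

	/* Sketch only: shows the error propagation; the locking and
	 * iget/iput handling of the real add_dquot_ref() is omitted. */
	static int add_dquot_ref(struct super_block *sb, int type)
	{
		struct inode *inode;
		int err = 0;

		list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
			err = __dquot_initialize(inode, type);
			if (err)
				break;	/* report failure to the caller */
		}
		return err;
	}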
>
> This bail-out path is not correct. You have to go through a full quota
> off at this point (the dquot_disable() function), as some inodes have
> already had their quotas initialized and may still be using them...
Yes, you're right; I've updated this in v2, please help to review it.
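
Roughly, the fix means a failure from add_dquot_ref() must tear down
through a full quota off rather than just clearing the flags, since some
inodes already hold initialized dquots. A hedged sketch of that error
path (not necessarily the exact v2 code):

	error = add_dquot_ref(sb, type);
	if (error) {
		/*
		 * Some inodes already had their dquots initialized, so a
		 * full quota off is needed to drop those references;
		 * clearing dqopt->flags alone would leak them.
		 */
		dquot_disable(sb, type,
			      DQUOT_USAGE_ENABLED | DQUOT_LIMITS_ENABLED);
		return error;
	}

	return 0;
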
Thanks,
>
> Honza
>