Message-ID: <011125b5-6343-6f4e-b420-3f152f395980@gmail.com>
Date: Fri, 2 Nov 2018 09:18:05 +0800
From: Joseph Qi <jiangqi903@...il.com>
To: Changwei Ge <ge.changwei@....com>, Larry Chen <lchen@...e.com>,
"mark@...heh.com" <mark@...heh.com>,
"jlbec@...lplan.org" <jlbec@...lplan.org>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"ocfs2-devel@....oracle.com" <ocfs2-devel@....oracle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [Ocfs2-devel] [PATCH V3] ocfs2: fix dead lock caused by
ocfs2_defrag_extent
On 18/11/1 20:34, Changwei Ge wrote:
> Hello Joseph,
>
> On 2018/11/1 20:16, Joseph Qi wrote:
>>
>>
>> On 18/11/1 19:52, Changwei Ge wrote:
>>> Hello Joseph,
>>>
>>> On 2018/11/1 17:01, Joseph Qi wrote:
>>>> Hi Larry,
>>>>
>>>> On 18/11/1 15:14, Larry Chen wrote:
>>>>> ocfs2_defrag_extent may fall into deadlock.
>>>>>
>>>>> ocfs2_ioctl_move_extents
>>>>>  ocfs2_move_extents
>>>>>   ocfs2_defrag_extent
>>>>>    ocfs2_lock_allocators_move_extents
>>>>>
>>>>>     ocfs2_reserve_clusters
>>>>>      inode_lock GLOBAL_BITMAP_SYSTEM_INODE
>>>>>
>>>>>     __ocfs2_flush_truncate_log
>>>>>      inode_lock GLOBAL_BITMAP_SYSTEM_INODE
>>>>>
>>>>> As the backtrace above shows, ocfs2_reserve_clusters() will call inode_lock
>>>>> against the global bitmap if the local allocator does not have sufficient
>>>>> clusters. Once the global bitmap can meet the demand, ocfs2_reserve_clusters()
>>>>> returns success with the global bitmap still locked.
>>>>>
>>>>> After ocfs2_reserve_clusters(), if the truncate log is full,
>>>>> __ocfs2_flush_truncate_log() will certainly deadlock, because it also needs
>>>>> to inode_lock the global bitmap, which is already locked.
>>>>>
>>>>> To fix this bug, remove the code that may lock the global allocator from
>>>>> ocfs2_lock_allocators_move_extents(), and move it to after
>>>>> __ocfs2_flush_truncate_log().
>>>>>
>>>>> ocfs2_lock_allocators_move_extents() is referenced in two places: one is
>>>>> here, and the other does not need the data allocator context, so this patch
>>>>> does not affect that caller.
>>>>>
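The ordering constraint can be seen in a minimal user-space sketch that models
the cluster lock on GLOBAL_BITMAP_SYSTEM_INODE as a non-recursive pthread
mutex; reserve_clusters() and flush_truncate_log() below are simplified
stand-ins for ocfs2_reserve_clusters() and __ocfs2_flush_truncate_log(), not
the kernel code:

#include <pthread.h>
#include <stdio.h>

/* Models the cluster lock on GLOBAL_BITMAP_SYSTEM_INODE (not recursive). */
static pthread_mutex_t global_bitmap_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for ocfs2_reserve_clusters(): returns with the lock still held. */
static void reserve_clusters(void)
{
	pthread_mutex_lock(&global_bitmap_lock);
	printf("clusters reserved, global bitmap left locked\n");
}

/* Stand-in for __ocfs2_flush_truncate_log(): takes and drops the lock. */
static void flush_truncate_log(void)
{
	pthread_mutex_lock(&global_bitmap_lock); /* hangs if caller already holds it */
	printf("truncate log flushed\n");
	pthread_mutex_unlock(&global_bitmap_lock);
}

int main(void)
{
	/* Patched order: flush first, then reserve -- no self-deadlock. */
	flush_truncate_log();
	reserve_clusters();
	pthread_mutex_unlock(&global_bitmap_lock);

	/* Old order: reserve first (lock held), then flush. */
	reserve_clusters();
	flush_truncate_log(); /* second lock on the same mutex never returns */
	return 0;
}

With the patched order the program completes; with the old order the second
lock attempt never returns, which is the hang described above.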
>>>>> Change log:
>>>>> 1. Correct the function comment.
>>>>> 2. Remove unused argument from ocfs2_lock_meta_allocator_move_extents.
>>>>>
>>>>> Signed-off-by: Larry Chen <lchen@...e.com>
>>>>> ---
>>>>> fs/ocfs2/move_extents.c | 47 ++++++++++++++++++++++++++---------------------
>>>>> 1 file changed, 26 insertions(+), 21 deletions(-)
>>>>>
>>>
>>>> IMO, clusters_to_move here is only for data_ac; since we change this
>>>> function to only handle meta_ac, I'm afraid the clusters_to_move related
>>>> logic has to be changed correspondingly.
>>>
>>> I think we can't remove *clusters_to_move* from here: clusters can be reserved
>>> later, outside this function, but we still have to reserve metadata (extents)
>>> in advance.
>>> So we need that argument.
>>>
>> I was not saying just remove it.
>> IIUC, clusters_to_move is for reserving data clusters (for meta_ac, we
>
> Um...
> *clusters_to_move* is not only used for reserving data clusters.
> It is also an input for calculating whether the existing extents still have
> enough free records for later tree operations like attaching clusters to extents.
>
> Please refer to the code below:
> 175 unsigned int max_recs_needed = 2 * extents_to_split + clusters_to_move;
>
IC. It is a bit odd to calculate it here but do the real reservation outside.
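To make this concrete, here is a simplified user-space model of that check;
need_meta_reservation() is illustrative only, and the real condition in the
kernel is more involved:

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative model of why clusters_to_move still matters for the
 * metadata (extent record) reservation: in the worst case each split
 * costs two records and each moved cluster needs one more.
 */
static bool need_meta_reservation(int num_free_extents,
				  unsigned int extents_to_split,
				  unsigned int clusters_to_move)
{
	unsigned int max_recs_needed = 2 * extents_to_split + clusters_to_move;

	/* Not enough free extent records -> reserve metadata up front. */
	return num_free_extents <= 0 ||
	       (unsigned int)num_free_extents < max_recs_needed;
}

int main(void)
{
	/* Example: 10 free records, one split, moving 16 clusters -> 18 needed. */
	printf("reserve metadata: %d\n",
	       need_meta_reservation(10, 1, 16)); /* prints 1 */
	return 0;
}

Dropping clusters_to_move would shrink max_recs_needed and could under-reserve
metadata, even though the data cluster reservation itself now happens outside.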
>
>
>> mostly talk about blocks). Since we have now moved the data cluster
>> reservation logic out of ocfs2_lock_allocators_move_extents(), keeping the
>> clusters_to_move related logic here looks odd.
>
> As I explained above, it is used to tell whether we need more extents.
> Anyway, I think we must keep *clusters_to_move* here as before. :-)
>
> Thanks,
> Changwei
>
>
>
>
>>
>>>>> u32 extents_to_split,
>>>>> struct ocfs2_alloc_context **meta_ac,
>>>>> - struct ocfs2_alloc_context **data_ac,
>>>>> int extra_blocks,
>>>>> int *credits)
>>>>> {
>>>>> @@ -192,13 +188,6 @@ static int ocfs2_lock_allocators_move_extents(struct inode *inode,
>>>>> goto out;
>>>>> }
>>>>>
>>>>> - if (data_ac) {
>>>>> - ret = ocfs2_reserve_clusters(osb, clusters_to_move, data_ac);
>>>>> - if (ret) {
>>>>> - mlog_errno(ret);
>>>>> - goto out;
>>>>> - }
>>>>> - }
>>>>>
>>>>> *credits += ocfs2_calc_extend_credits(osb->sb, et->et_root_el);
>>>>>
>>>>> @@ -257,10 +246,10 @@ static int ocfs2_defrag_extent(struct ocfs2_move_extents_context *context,
>>>>> }
>>>>> }
>>>>>
>>>>> - ret = ocfs2_lock_allocators_move_extents(inode, &context->et, *len, 1,
>>>>> - &context->meta_ac,
>>>>> - &context->data_ac,
>>>>> - extra_blocks, &credits);
>>>>> + ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et,
>>>>> + *len, 1,
>>>>> + &context->meta_ac,
>>>>> + extra_blocks, &credits);
>>>>> if (ret) {
>>>>> mlog_errno(ret);
>>>>> goto out;
>>>>> @@ -283,6 +272,21 @@ static int ocfs2_defrag_extent(struct ocfs2_move_extents_context *context,
>>>>> }
>>>>> }
>>>>>
>>>>> + /*
>>>>> + * Make sure ocfs2_reserve_cluster is called after
>>>>> + * __ocfs2_flush_truncate_log, otherwise, dead lock may happen.
>>>>> + *
>>>>> + * If ocfs2_reserve_cluster is called
>>>>> + * before __ocfs2_flush_truncate_log, dead lock on global bitmap
>>>>> + * may happen.
>>>>> + *
>>>>> + */
>>>>> + ret = ocfs2_reserve_clusters(osb, *len, &context->data_ac);
>>>>> + if (ret) {
>>>>> + mlog_errno(ret);
>>>>> + goto out_unlock_mutex;
>>>>> + }
>>>>> +
>>>>> handle = ocfs2_start_trans(osb, credits);
>>>>> if (IS_ERR(handle)) {
>>>>> ret = PTR_ERR(handle);
>>>>> @@ -600,9 +604,10 @@ static int ocfs2_move_extent(struct ocfs2_move_extents_context *context,
>>>>> }
>>>>> }
>>>>>
>>>>> - ret = ocfs2_lock_allocators_move_extents(inode, &context->et, len, 1,
>>>>> - &context->meta_ac,
>>>>> - NULL, extra_blocks, &credits);
>>>>> + ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et,
>>>>> + len, 1,
>>>>> + &context->meta_ac,
>>>>> + extra_blocks, &credits);
>>>>> if (ret) {
>>>>> mlog_errno(ret);
>>>>> goto out;
>>>>>
>>>>
>>