Message-ID: <f37b9b6b-730b-09b0-dd6b-5acba53e71e6@linux.alibaba.com>
Date: Fri, 6 Mar 2020 19:58:19 +0800
From: Alex Shi <alex.shi@...ux.alibaba.com>
To: Qian Cai <cai@....pw>, LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: aarcange@...hat.com, daniel.m.jordan@...cle.com,
hannes@...xchg.org, hughd@...gle.com, khlebnikov@...dex-team.ru,
kirill@...temov.name, kravetz@...ibm.com, mhocko@...nel.org,
mm-commits@...r.kernel.org, tj@...nel.org, vdavydov.dev@...il.com,
willy@...radead.org, yang.shi@...ux.alibaba.com
Subject: Re: [failures] mm-vmscan-remove-unnecessary-lruvec-adding.patch
removed from -mm tree
On 2020/3/6 5:04 PM, Alex Shi wrote:
>
>
> On 2020/3/6 11:32 AM, Qian Cai wrote:
>>
>>> On Mar 5, 2020, at 9:50 PM, akpm@...ux-foundation.org wrote:
>>>
>>>
>>> The patch titled
>>> Subject: mm/vmscan: remove unnecessary lruvec adding
>>> has been removed from the -mm tree. Its filename was
>>> mm-vmscan-remove-unnecessary-lruvec-adding.patch
>>>
>>> This patch was dropped because it had testing failures
>> Andrew, do you have more information about this failure? I hit a bug
>> here under memory pressure and am wondering if this is related
>> which might save me some time digging…
>>
>> [ 4389.727184][ T6600] mem_cgroup_update_lru_size(00000000bb31aaed, 0, -7): lru_size -1
>
> This bug looks like it is caused by a missing or misplaced update_lru_size(), but
> what I changed in this patch seems unlikely to cause it.
>
> Anyway, Qian, could you do me a favor and remove this patch, then try again?
Compared to this patch's change, 'c8cba0cc2a80 mm/thp: narrow lru locking' is more
likely the bad one. Maybe it's because the lru unlock was moved from before
remap_page(head) to before ClearPageCompound(); I guess the unlock should be moved
to after ClearPageCompound(), or back to its original place.

But I still cannot reproduce this bug. Awkward!
Alex
---
line 2605 mm/huge_memory.c:

	spin_unlock_irqrestore(&pgdat->lru_lock, flags);

	ClearPageCompound(head);
	split_page_owner(head, HPAGE_PMD_ORDER);

	/* See comment in __split_huge_page_tail() */
	if (PageAnon(head)) {
		/* Additional pin to swap cache */
		if (PageSwapCache(head)) {
			page_ref_add(head, 2);
			xa_unlock(&swap_cache->i_pages);
		} else {
			page_ref_inc(head);
		}
	} else {
		/* Additional pin to page cache */
		page_ref_add(head, 2);
		xa_unlock(&head->mapping->i_pages);
	}

	remap_page(head);
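
As a sketch only (untested, and just my guess above, not a verified fix), the
first of the two options would look like this, keeping ClearPageCompound() under
the lru_lock:

```diff
@@ mm/huge_memory.c: __split_huge_page()
-	spin_unlock_irqrestore(&pgdat->lru_lock, flags);
-
 	ClearPageCompound(head);
+
+	spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+
 	split_page_owner(head, HPAGE_PMD_ORDER);
```

The other option is simply reverting the unlock back to its original position
after remap_page(head). I have not confirmed either one fixes the
mem_cgroup_update_lru_size() underflow Qian reported.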