Message-ID: <87d2qfck2l.fsf@linux.vnet.ibm.com>
Date: Fri, 19 Jul 2013 08:50:02 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
To: Hillf Danton <dhillf@...il.com>
Cc: Minchan Kim <minchan@...nel.org>, Dave Jones <davej@...hat.com>,
Linux Kernel <linux-kernel@...r.kernel.org>,
linux-mm@...ck.org, Rik van Riel <riel@...hat.com>,
Michal Hocko <mhocko@...e.cz>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: hugepage related lockdep trace.
Hillf Danton <dhillf@...il.com> writes:
> On Fri, Jul 19, 2013 at 1:42 AM, Aneesh Kumar K.V
> <aneesh.kumar@...ux.vnet.ibm.com> wrote:
>> Minchan Kim <minchan@...nel.org> writes:
>>> IMHO, it's a false positive, because i_mmap_mutex was held by kswapd,
>>> while a task in the middle of the fault path can never be in kswapd
>>> context.
>>>
>>> It seems the lockdep reclaim-over-fs checks aren't smart enough to
>>> distinguish between background and direct reclaim.
>>>
>>> Wait for other's opinion.
>>
>> Is that reasoning correct? We may not deadlock, because hugetlb pages
>> cannot be reclaimed, so the fault path in hugetlb won't end up
>> reclaiming pages from the same inode. But the report itself is
>> correct, right?
>>
>>
>> Looking at the hugetlb code we have in huge_pmd_share
>>
>> out:
>> 	pte = (pte_t *)pmd_alloc(mm, pud, addr);
>> 	mutex_unlock(&mapping->i_mmap_mutex);
>> 	return pte;
>>
>> I guess we should move that pmd_alloc() outside i_mmap_mutex.
>> Otherwise that pmd_alloc() can result in a reclaim, which can end up
>> calling shrink_page_list()?
>>
> Hm, can huge pages currently be reclaimed, say by kswapd?
No, we don't reclaim hugetlb pages.
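
If pmd_alloc() can still trigger direct reclaim, though, moving it out
from under the lock looks worth doing anyway. A minimal, completely
untested sketch of that reordering (it assumes pmd_alloc() depends on
nothing that i_mmap_mutex protects at this point, since the
pud_populate() of a shared pmd earlier in huge_pmd_share() already
happens under the lock):

out:
	/*
	 * Drop i_mmap_mutex before allocating the page table, so a
	 * direct reclaim triggered by the allocation can no longer
	 * nest inside it.
	 */
	mutex_unlock(&mapping->i_mmap_mutex);
	pte = (pte_t *)pmd_alloc(mm, pud, addr);
	return pte;

That way the fault path no longer enters the allocator while holding
i_mmap_mutex, which should also address the lock chain in this report.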
-aneesh