Message-ID: <20160621143701.GA6139@node.shutemov.name>
Date: Tue, 21 Jun 2016 17:37:01 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: zhongjiang <zhongjiang@...wei.com>
Cc: mhocko@...nel.org, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/huge_memory: fix the memory leak due to the race
On Tue, Jun 21, 2016 at 10:05:56PM +0800, zhongjiang wrote:
> From: zhong jiang <zhongjiang@...wei.com>
>
> Under heavy memory pressure, I ran some test cases. As a result, I found
> that a THP was not freed; it was detected by check_mm().
>
> BUG: Bad rss-counter state mm:ffff8827edb70000 idx:1 val:512
>
> Consider the following race:
>
> CPU0                          CPU1
> __handle_mm_fault()
>   wp_huge_pmd()
>     do_huge_pmd_wp_page()
>       pmdp_huge_clear_flush_notify()
>       (pmd_none = true)
>                               exit_mmap()
>                                 unmap_vmas()
>                                   zap_pmd_range()
>                                     pmd_none_or_trans_huge_or_clear_bad()
>                                     (result in memory leak)
>       set_pmd_at()
>
> Because CPU0 has allocated the huge page before pmdp_huge_clear_flush_notify(),
> which leaves the pmd entry none while CPU1 zaps the range, the memory leak
> can occur.
>
> This patch fixes the scenario in which the pmd entry can be left none.
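
For reference, the interleaving claimed above can be modelled in plain C.
The snippet below only mocks the ordering in the diagram, with a bare
variable standing in for the pmd; it is not kernel code:

#include <stdbool.h>
#include <stdio.h>

static unsigned long pmd;		/* 0 means "pmd is none" */

static bool pmd_is_none(void) { return pmd == 0; }

int main(void)
{
	pmd = 0x1000;			/* old mapping still present */

	/* CPU0: do_huge_pmd_wp_page() clears the old entry first ... */
	pmd = 0;			/* pmdp_huge_clear_flush_notify() */

	/* CPU1: zap_pmd_range() sees a none pmd and skips the range */
	if (pmd_is_none())
		printf("CPU1: pmd is none, nothing zapped\n");

	/* CPU0: ... and only then installs the freshly allocated huge page */
	pmd = 0x2000;			/* set_pmd_at() */

	printf("pmd still populated after unmap: the huge page would leak\n");
	return 0;
}
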
I don't think the scenario is possible.

exit_mmap() is called only once all mm users have gone, so no parallel
threads exist.
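
To make that concrete, here is a rough userspace model of the mm_users
reasoning; the names mirror mmput()/exit_mmap(), but this is a simplified
mock, not the kernel implementation:

#include <stdatomic.h>
#include <stdio.h>

struct mm_struct { atomic_int mm_users; };

static void exit_mmap(struct mm_struct *mm)
{
	/* Runs only once the last user of the mm is gone. */
	printf("exit_mmap: tearing down mappings for %p\n", (void *)mm);
}

static void mmput(struct mm_struct *mm)
{
	/* Drop one reference; tear down only if it was the last one. */
	if (atomic_fetch_sub(&mm->mm_users, 1) == 1)
		exit_mmap(mm);
}

int main(void)
{
	struct mm_struct mm = { .mm_users = 2 };	/* two threads share the mm */

	mmput(&mm);	/* one thread exits; the other may still take faults */
	mmput(&mm);	/* last user gone; only now does exit_mmap() run */
	return 0;
}

A thread that could still fault on the mm holds one of these references
itself, so by the time exit_mmap() runs there is nobody left to race with.
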
--
Kirill A. Shutemov