Message-ID: <8141c2df-5643-4ba9-42a5-5b536517cdee@oracle.com>
Date: Fri, 17 Jun 2016 08:39:02 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: Michal Hocko <mhocko@...nel.org>,
zhongjiang <zhongjiang@...wei.com>, akpm@...ux-foundation.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: fix account pmd page to the process
On 06/17/2016 05:25 AM, Kirill A. Shutemov wrote:
>
> From fd22922e7b4664e83653a84331f0a95b985bff0c Mon Sep 17 00:00:00 2001
> From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> Date: Fri, 17 Jun 2016 15:07:03 +0300
> Subject: [PATCH] hugetlb: fix nr_pmds accounting with shared page tables
>
> We account HugeTLB's shared page table to all processes who share it.
> The accounting happens during huge_pmd_share().
>
> If somebody populates the pud entry under us, we should decrease the
> page table's refcount and decrease nr_pmds of the process.
>
> By mistake, I increased nr_pmds again in this case. :-/
> This leads to "BUG: non-zero nr_pmds on freeing mm: 2" on process
> exit.
>
> Let's fix this by increasing nr_pmds only when we're sure that the page
> table will be used.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Nice,
Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>
I agree that we do not necessarily need a backport. I have not seen
reports of people experiencing this race and hitting the BUG (on mm
teardown).
zhongjiang, did someone actually hit the BUG? Or did you find it by
code examination?
--
Mike Kravetz
> Reported-by: zhongjiang <zhongjiang@...wei.com>
> Fixes: dc6c9a35b66b ("mm: account pmd page tables to the process")
> Cc: <stable@...r.kernel.org> [4.0+]
> ---
> mm/hugetlb.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index e197cd7080e6..ed6a537f0878 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4216,7 +4216,6 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
> if (saddr) {
> spte = huge_pte_offset(svma->vm_mm, saddr);
> if (spte) {
> - mm_inc_nr_pmds(mm);
> get_page(virt_to_page(spte));
> break;
> }
> @@ -4231,9 +4230,9 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
> if (pud_none(*pud)) {
> pud_populate(mm, pud,
> (pmd_t *)((unsigned long)spte & PAGE_MASK));
> + mm_inc_nr_pmds(mm);
> } else {
> put_page(virt_to_page(spte));
> - mm_inc_nr_pmds(mm);
> }
> spin_unlock(ptl);
> out:
>