Message-ID: <CAJd=RBA53nS70Q7GEeskKFas-hfg4GKmUf=Zut5anSN0P+d1KA@mail.gmail.com>
Date: Thu, 23 Feb 2012 21:05:41 +0800
From: Hillf Danton <dhillf@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Linux-MM <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
Michal Hocko <mhocko@...e.cz>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH] mm: hugetlb: bail out unmapping after serving reference page
On Thu, Feb 23, 2012 at 5:06 AM, Andrew Morton
<akpm@...ux-foundation.org> wrote:
>
> Perhaps add a little comment to this explaining what's going on?
>
>
> It would be sufficient to do
>
>	if (ref_page)
>		break;
>
> This is more efficient, and doesn't make people worry about whether
> this value of `page' is the same as the one which
> pte_page(huge_ptep_get()) earlier returned.
>
Hi Andrew,

Here is the re-prepared patch:
===cut here===
From: Hillf Danton <dhillf@...il.com>
Subject: [PATCH] mm: hugetlb: bail out unmapping after serving reference page
When unmapping a given VM range, we can bail out as soon as a supplied
reference page has been unmapped, which is a minor optimization.
Signed-off-by: Hillf Danton <dhillf@...il.com>
---
--- a/mm/hugetlb.c Wed Feb 22 19:34:12 2012
+++ b/mm/hugetlb.c Thu Feb 23 20:13:06 2012
@@ -2280,6 +2280,10 @@ void __unmap_hugepage_range(struct vm_ar
 		if (pte_dirty(pte))
 			set_page_dirty(page);
 		list_add(&page->lru, &page_list);
+
+		/* Bail out after unmapping reference page if supplied */
+		if (ref_page)
+			break;
 	}
 	spin_unlock(&mm->page_table_lock);
 	flush_tlb_range(vma, start, end);
--
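
To make the effect of the bail-out concrete, here is a small user-space
sketch of the same control flow (illustration only: the toy_page structure,
toy_unmap_range() and the other names are invented for this example and are
not kernel code). When a reference page is supplied, the walk stops as soon
as that page has been served instead of scanning the remainder of the range:

/* toy_unmap_range.c - illustration only; all names here are invented. */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_page {
	bool mapped;
};

/*
 * Walk a "range" of toy pages and unmap them.  If @ref is non-NULL, only
 * that page is of interest, so stop as soon as it has been served --
 * the same idea as the "if (ref_page) break;" added above.
 */
static int toy_unmap_range(struct toy_page *pages, int n, struct toy_page *ref)
{
	int visited = 0;

	for (int i = 0; i < n; i++) {
		visited++;
		if (!pages[i].mapped)		/* nothing to do, like huge_pte_none() */
			continue;
		if (ref && &pages[i] != ref)	/* like the page != ref_page check */
			continue;

		pages[i].mapped = false;	/* "unmap" the page */

		/* Bail out after unmapping reference page if supplied */
		if (ref)
			break;
	}
	return visited;
}

int main(void)
{
	struct toy_page pages[8];

	for (int i = 0; i < 8; i++)
		pages[i].mapped = true;

	printf("single reference page: %d iterations\n",
	       toy_unmap_range(pages, 8, &pages[2]));

	for (int i = 0; i < 8; i++)
		pages[i].mapped = true;

	printf("whole range: %d iterations\n",
	       toy_unmap_range(pages, 8, NULL));
	return 0;
}

Built with gcc, the toy reports 3 iterations for the single-page case versus
8 for the full range, which is all the optimization amounts to.
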
> Why do we evaluate `page' twice inside that loop anyway? And why do we
> check for huge_pte_none() twice? It looks all messed up.
>
and a follow-up cleanup addressing that is attached below.
Thanks
Hillf
===cut here===
From: Hillf Danton <dhillf@...il.com>
Subject: [PATCH] mm: hugetlb: cleanup duplicated code in unmapping vm range
When unmapping a given VM range, a couple of operations, such as pte_page()
and huge_pte_none(), are duplicated between the reference-page path and the
main path, so compact them into a single place at the top of the loop.
Signed-off-by: Hillf Danton <dhillf@...il.com>
---
--- a/mm/hugetlb.c Thu Feb 23 20:13:06 2012
+++ b/mm/hugetlb.c Thu Feb 23 20:30:16 2012
@@ -2245,16 +2245,23 @@ void __unmap_hugepage_range(struct vm_ar
 		if (huge_pmd_unshare(mm, &address, ptep))
 			continue;
 
+		pte = huge_ptep_get(ptep);
+		if (huge_pte_none(pte))
+			continue;
+
+		/*
+		 * HWPoisoned hugepage is already unmapped and dropped reference
+		 */
+		if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
+			continue;
+
+		page = pte_page(pte);
 		/*
 		 * If a reference page is supplied, it is because a specific
 		 * page is being unmapped, not a range. Ensure the page we
 		 * are about to unmap is the actual page of interest.
 		 */
 		if (ref_page) {
-			pte = huge_ptep_get(ptep);
-			if (huge_pte_none(pte))
-				continue;
-			page = pte_page(pte);
 			if (page != ref_page)
 				continue;
 
@@ -2267,16 +2274,6 @@ void __unmap_hugepage_range(struct vm_ar
 		}
 
 		pte = huge_ptep_get_and_clear(mm, address, ptep);
-		if (huge_pte_none(pte))
-			continue;
-
-		/*
-		 * HWPoisoned hugepage is already unmapped and dropped reference
-		 */
-		if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
-			continue;
-
-		page = pte_page(pte);
 		if (pte_dirty(pte))
 			set_page_dirty(page);
 		list_add(&page->lru, &page_list);
--
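
For a rough picture of what the cleanup buys, here is a comparable
user-space sketch (again, every name below is invented for illustration and
none of it is kernel code): the entry is read and the page looked up once
per iteration, and the none/hwpoisoned filtering happens before the
reference-page comparison, so neither check appears twice.

/* toy_unmap_cleanup.c - illustration only; all names here are invented. */
#include <stdio.h>
#include <stdbool.h>

struct toy_pte {
	bool none;		/* stands in for huge_pte_none() */
	bool hwpoisoned;	/* stands in for is_hugetlb_entry_hwpoisoned() */
	bool dirty;		/* stands in for pte_dirty() */
	int page_id;		/* stands in for pte_page() */
};

static void toy_unmap(struct toy_pte *ptes, int n, int ref_page_id)
{
	for (int i = 0; i < n; i++) {
		struct toy_pte pte = ptes[i];	/* read the entry exactly once */

		if (pte.none)
			continue;
		if (pte.hwpoisoned)		/* already unmapped, skip it */
			continue;

		/* Only now compare against the reference page, if one was given. */
		if (ref_page_id >= 0 && pte.page_id != ref_page_id)
			continue;

		ptes[i].none = true;		/* clear the entry */
		if (pte.dirty)
			printf("page %d was dirty\n", pte.page_id);
		printf("page %d unmapped\n", pte.page_id);

		/* Bail out after unmapping reference page if supplied */
		if (ref_page_id >= 0)
			break;
	}
}

int main(void)
{
	struct toy_pte ptes[] = {
		{ .page_id = 0 },
		{ .none = true, .page_id = 1 },
		{ .hwpoisoned = true, .page_id = 2 },
		{ .dirty = true, .page_id = 3 },
	};

	toy_unmap(ptes, 4, -1);	/* unmap the whole toy range */
	return 0;
}
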