Message-ID: <20121017130125.GH5973@mudshark.cambridge.arm.com>
Date: Wed, 17 Oct 2012 14:01:25 +0100
From: Will Deacon <will.deacon@....com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"mhocko@...e.cz" <mhocko@...e.cz>,
"kirill@...temov.name" <kirill@...temov.name>,
Andrea Arcangeli <aarcange@...hat.com>,
Chris Metcalf <cmetcalf@...era.com>,
Steve Capper <Steve.Capper@....com>
Subject: Re: [PATCH v2] mm: thp: Set the accessed flag for old pages on
access fault.

Hi Andrew,

On Tue, Oct 02, 2012 at 11:01:04PM +0100, Andrew Morton wrote:
> On Tue, 2 Oct 2012 17:59:11 +0100
> Will Deacon <will.deacon@....com> wrote:
>
> > On x86 memory accesses to pages without the ACCESSED flag set result in the
> > ACCESSED flag being set automatically. With the ARM architecture a page access
> > fault is raised instead (and it will continue to be raised until the ACCESSED
> > flag is set for the appropriate PTE/PMD).
> >
> > For normal memory pages, handle_pte_fault will call pte_mkyoung (effectively
> > setting the ACCESSED flag). For transparent huge pages, pmd_mkyoung will only
> > be called for a write fault.
> >
> > This patch ensures that faults on transparent hugepages which do not result
> > in a CoW update the access flags for the faulting pmd.
>
> Alas, the code you're altering has changed so much in linux-next that I
> am reluctant to force this fix in there myself. Can you please
> redo/retest/resend? You can do that on 3.7-rc1 if you like, then we
> can feed this into -rc2.

Here's the updated patch against -rc1...

Cheers,

Will
--->8
From 16b73aea010832cd3d6f22ada60ae055937ef6c0 Mon Sep 17 00:00:00 2001
From: Will Deacon <will.deacon@....com>
Date: Tue, 2 Oct 2012 11:18:52 +0100
Subject: [PATCH] mm: thp: Set the accessed flag for old pages on access fault.

On x86, memory accesses to pages without the ACCESSED flag set result in the
ACCESSED flag being set automatically. With the ARM architecture, a page access
fault is raised instead (and it will continue to be raised until the ACCESSED
flag is set for the appropriate PTE/PMD).

For normal memory pages, handle_pte_fault will call pte_mkyoung (effectively
setting the ACCESSED flag). For transparent huge pages, pmd_mkyoung will only
be called for a write fault.

This patch ensures that faults on transparent hugepages which do not result
in a CoW update the access flags for the faulting pmd.
Cc: Chris Metcalf <cmetcalf@...era.com>
Acked-by: Kirill A. Shutemov <kirill@...temov.name>
Reviewed-by: Andrea Arcangeli <aarcange@...hat.com>
Signed-off-by: Steve Capper <steve.capper@....com>
Signed-off-by: Will Deacon <will.deacon@....com>
---
 include/linux/huge_mm.h |    2 ++
 mm/huge_memory.c        |    8 ++++++++
 mm/memory.c             |    9 ++++++++-
 3 files changed, 18 insertions(+), 1 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b31cb7d..62a0d5a 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -8,6 +8,8 @@ extern int do_huge_pmd_anonymous_page(struct mm_struct *mm,
 extern int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
			 pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
			 struct vm_area_struct *vma);
+extern void huge_pmd_set_accessed(struct vm_area_struct *vma,
+				  unsigned long address, pmd_t *pmd, int dirty);
 extern int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
			       unsigned long address, pmd_t *pmd,
			       pmd_t orig_pmd);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a863af2..5a7ce24 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -878,6 +878,14 @@ out_free_pages:
	goto out;
 }

+void huge_pmd_set_accessed(struct vm_area_struct *vma, unsigned long address,
+			   pmd_t *pmd, int dirty)
+{
+	pmd_t entry = pmd_mkyoung(*pmd);
+	if (pmdp_set_access_flags(vma, address & HPAGE_PMD_MASK, pmd, entry, dirty))
+		update_mmu_cache(vma, address, pmd);
+}
+
 int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
			unsigned long address, pmd_t *pmd, pmd_t orig_pmd)
 {
diff --git a/mm/memory.c b/mm/memory.c
index fb135ba..c55c17c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3539,7 +3539,8 @@ retry:
		barrier();
		if (pmd_trans_huge(orig_pmd)) {
-			if (flags & FAULT_FLAG_WRITE &&
+			unsigned int dirty = flags & FAULT_FLAG_WRITE;
+			if (dirty &&
			    !pmd_write(orig_pmd) &&
			    !pmd_trans_splitting(orig_pmd)) {
				ret = do_huge_pmd_wp_page(mm, vma, address, pmd,
@@ -3552,7 +3553,13 @@ retry:
				if (unlikely(ret & VM_FAULT_OOM))
					goto retry;
				return ret;
+			} else if (pmd_trans_huge_lock(pmd, vma) == 1) {
+				if (likely(pmd_same(*pmd, orig_pmd)))
+					huge_pmd_set_accessed(vma, address, pmd,
+							      dirty);
+				spin_unlock(&mm->page_table_lock);
			}
+
			return 0;
		}
	}
--
1.7.4.1