Message-ID: <20170307140453.GB2412@node>
Date: Tue, 7 Mar 2017 17:04:53 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Minchan Kim <minchan@...nel.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Hillf Danton <hillf.zj@...baba-inc.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
'Andrea Arcangeli' <aarcange@...hat.com>,
'Andrew Morton' <akpm@...ux-foundation.org>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/4] thp: fix MADV_DONTNEED vs. MADV_FREE race
On Mon, Mar 06, 2017 at 10:44:46AM +0900, Minchan Kim wrote:
> Hello, Kirill,
>
> On Fri, Mar 03, 2017 at 01:26:36PM +0300, Kirill A. Shutemov wrote:
> > On Fri, Mar 03, 2017 at 01:35:11PM +0800, Hillf Danton wrote:
> > >
> > > On March 02, 2017 11:11 PM Kirill A. Shutemov wrote:
> > > >
> > > > Basically the same race as with numa balancing in change_huge_pmd(), but
> > > > a bit simpler to mitigate: we don't need to preserve dirty/young flags
> > > > here due to MADV_FREE functionality.
> > > >
> > > > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> > > > Cc: Minchan Kim <minchan@...nel.org>
> > > > ---
> > > > mm/huge_memory.c | 2 --
> > > > 1 file changed, 2 deletions(-)
> > > >
> > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > index bb2b3646bd78..324217c31ec9 100644
> > > > --- a/mm/huge_memory.c
> > > > +++ b/mm/huge_memory.c
> > > > @@ -1566,8 +1566,6 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > > >  		deactivate_page(page);
> > > >
> > > >  	if (pmd_young(orig_pmd) || pmd_dirty(orig_pmd)) {
> > > > -		orig_pmd = pmdp_huge_get_and_clear_full(tlb->mm, addr, pmd,
> > > > -							tlb->fullmm);
> > > >  		orig_pmd = pmd_mkold(orig_pmd);
> > > >  		orig_pmd = pmd_mkclean(orig_pmd);
> > > >
> > > >
> > > $ grep -n set_pmd_at linux-4.10/arch/powerpc/mm/pgtable-book3s64.c
> > >
> > > /*
> > > * set a new huge pmd. We should not be called for updating
> > > * an existing pmd entry. That should go via pmd_hugepage_update.
> > > */
> > > void set_pmd_at(struct mm_struct *mm, unsigned long addr,
> >
> > +Aneesh.
> >
> > Urgh... Power is special again.
> >
> > I think this should work fine.
> >
> > From 056914fa025992c0a2212aee057c26307ce60238 Mon Sep 17 00:00:00 2001
> > From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> > Date: Thu, 2 Mar 2017 16:47:45 +0300
> > Subject: [PATCH] thp: fix MADV_DONTNEED vs. MADV_FREE race
> >
> > Basically the same race as with numa balancing in change_huge_pmd(), but
> > a bit simpler to mitigate: we don't need to preserve dirty/young flags
> > here due to MADV_FREE functionality.
>
> Could you elaborate a bit more here rather than relying on other
> patch's description?
Okay, updated patch is below.
> And could you say what happens to the userspace if that race
> happens? When I guess from title "MADV_DONTNEED vs MADV_FREE",
> a page cannot be zapped but marked lazyfree or vise versa? Right?
The "vice versa" part should be fine. The case I'm worried about is that
MADV_DONTNEED would skip the pmd and it would not be cleared.
Userspace expects the area of memory to be clean after MADV_DONTNEED, but
it's not, which can lead to userspace misbehaviour.
From a0967b0293a6f8053d85785c4d6340e550e849ea Mon Sep 17 00:00:00 2001
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Date: Thu, 2 Mar 2017 16:47:45 +0300
Subject: [PATCH] thp: fix MADV_DONTNEED vs. MADV_FREE race
Both MADV_DONTNEED and MADV_FREE are handled with down_read(mmap_sem).
It's critical not to clear the pmd intermittently while handling MADV_FREE,
to avoid racing with MADV_DONTNEED:
	CPU0:				CPU1:
				madvise_free_huge_pmd()
				 pmdp_huge_get_and_clear_full()
	madvise_dontneed()
	 zap_pmd_range()
	  pmd_trans_huge(*pmd) == 0 (without ptl)
	  // skip the pmd
				 set_pmd_at();
				 // pmd is re-established
It results in MADV_DONTNEED skipping the pmd, leaving it not cleared. That
violates the MADV_DONTNEED interface and can result in userspace
misbehaviour.
Basically it's the same race as with numa balancing in change_huge_pmd(),
but a bit simpler to mitigate: we don't need to preserve dirty/young flags
here due to MADV_FREE functionality.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Cc: Minchan Kim <minchan@...nel.org>
---
mm/huge_memory.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 51a8c376d020..3c9ef1104d85 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1568,8 +1568,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		deactivate_page(page);

 	if (pmd_young(orig_pmd) || pmd_dirty(orig_pmd)) {
-		orig_pmd = pmdp_huge_get_and_clear_full(tlb->mm, addr, pmd,
-							tlb->fullmm);
+		pmdp_invalidate(vma, addr, pmd);
 		orig_pmd = pmd_mkold(orig_pmd);
 		orig_pmd = pmd_mkclean(orig_pmd);
--
Kirill A. Shutemov