Message-ID: <Y1vz7VvQ5zg5KTxk@monkey>
Date: Fri, 28 Oct 2022 08:23:25 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Peter Xu <peterx@...hat.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
David Hildenbrand <david@...hat.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Mina Almasry <almasrymina@...gle.com>,
Rik van Riel <riel@...riel.com>,
Vlastimil Babka <vbabka@...e.cz>,
Matthew Wilcox <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Wei Chen <harperchen1110@...il.com>, stable@...r.kernel.org
Subject: Re: [PATCH v2] hugetlb: don't delete vma_lock in hugetlb
MADV_DONTNEED processing
On 10/26/22 21:12, Peter Xu wrote:
> On Wed, Oct 26, 2022 at 04:54:01PM -0700, Mike Kravetz wrote:
> > On 10/26/22 17:42, Peter Xu wrote:
> > >
> > > Pure question: can we rely on hugetlb_vm_op_close() to destroy the hugetlb
> > > vma lock?
> > >
> > > I read the comment above, it seems we are trying to avoid racing with pmd
> > > sharing, but I don't see how that could ever happen, since iiuc there
> > > should only be two places that unmaps the vma (final==true):
> > >
> > > (1) munmap: we're holding write lock, so no page fault possible
> > > (2) exit_mmap: we've already reset current->mm so no page fault possible
> > >
> >
> > Thanks for taking a look, Peter!
> >
> > The possible sharing we are trying to stop would be initiated by a fault
> > in a different process on the same underlying mapping object (inode). The
> > specific vma in exit processing is still linked into the mapping interval
> > tree. So, even though we call huge_pmd_unshare in the unmap processing (in
> > __unmap_hugepage_range) the sharing could later be initiated by another
> > process.
> >
> > Hope that makes sense. That is also the reason the routine
> > page_table_shareable contains this check:
> >
> > 	/*
> > 	 * match the virtual addresses, permission and the alignment of the
> > 	 * page table page.
> > 	 *
> > 	 * Also, vma_lock (vm_private_data) is required for sharing.
> > 	 */
> > 	if (pmd_index(addr) != pmd_index(saddr) ||
> > 	    vm_flags != svm_flags ||
> > 	    !range_in_vma(svma, sbase, s_end) ||
> > 	    !svma->vm_private_data)
> > 		return 0;
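(For context, the initiating side is huge_pmd_share(), which walks the
interval tree of the shared mapping and applies the check above to each
candidate vma. Abbreviated sketch -- setup of mapping/idx and the page
table refcounting omitted, so not the exact code:

	i_mmap_lock_read(mapping);
	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
		if (svma == vma)
			continue;

		saddr = page_table_shareable(svma, vma, addr, idx);
		if (saddr) {
			/* Another vma we may share a pmd page with. */
			spte = huge_pte_offset(svma->vm_mm, saddr,
					       vma_mmu_pagesize(svma));
			if (spte)
				break;
		}
	}

So any other process that still finds this vma in the interval tree,
with a non-NULL vm_private_data, can initiate sharing against it.)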
>
> Ah, makes sense. Hmm, then I'm wondering whether hugetlb_vma_lock_free()
> would ever be useful at all? Because remove_vma() (or say, the close()
> hook) seems to always be called after a preceding unmap_vmas().
You are right. hugetlb_vma_lock_free will almost always be a noop when
called from the close hook. It is still 'needed' for vma setup error
paths.
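For reference, a minimal sketch of why it is a noop there, assuming the
free routine bails out once vm_private_data has already been cleared.
The shareability helper name below is illustrative, and the real code
refcounts the lock rather than freeing it directly:

	static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
	{
		/* The lock only exists for sharable hugetlb vmas. */
		if (!vma_lock_shareable(vma))
			return;

		/*
		 * Already freed, e.g. by __hugetlb_vma_unlock_write_free()
		 * in the earlier unmap_vmas() pass -- nothing to do.
		 */
		if (!vma->vm_private_data)
			return;

		kfree(vma->vm_private_data);
		vma->vm_private_data = NULL;
	}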
> > > > +void clear_hugetlb_page_range(struct vm_area_struct *vma, unsigned long start,
> > > > +			       unsigned long end)
> > > > +{
> > > > +	struct mmu_notifier_range range;
> > > > +	struct mmu_gather tlb;
> > > > +
> > > > +	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
> > > > +				start, end);
> > >
> > > Is mmu_notifier_invalidate_range_start() missing here?
> > >
> >
> > It certainly does look like it. When I created this routine, I was trying to
> > mimic what was done in the current calling path from zap_page_range to
> > __unmap_hugepage_range_final. Now when I look at that, I am not seeing
> > a mmu_notifier_invalidate_range_start/end. Am I missing something, or
> > are these missing today?
>
> I'm not sure whether we're looking at the same code base; here it's in
> zap_page_range() itself.
>
> 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
> 				start, start + size);
> 	tlb_gather_mmu(&tlb, vma->vm_mm);
> 	update_hiwater_rss(vma->vm_mm);
> 	mmu_notifier_invalidate_range_start(&range);
> 	do {
> 		unmap_single_vma(&tlb, vma, start, range.end, NULL);
> 	} while ((vma = mas_find(&mas, end - 1)) != NULL);
> 	mmu_notifier_invalidate_range_end(&range);
Yes, I missed that. Thanks!
>
> > Do note that we do MMU_NOTIFY_UNMAP in __unmap_hugepage_range.
>
> Hmm, I think we may want CLEAR for zap-only and UNMAP only for unmap.
>
> * @MMU_NOTIFY_UNMAP: either munmap() that unmap the range or a mremap() that
> * move the range
> * @MMU_NOTIFY_CLEAR: clear page table entry (many reasons for this like
> * madvise() or replacing a page by another one, ...).
>
> The other thing is that unmap_vmas() also notifies (same to
> zap_page_range), it looks like a duplicated notification if any of them calls
> __unmap_hugepage_range() at last.
The only call into __unmap_hugepage_range() from generic zap/unmap calls
is via __unmap_hugepage_range_final. Other call paths are entirely
within hugetlb code.
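Roughly, as I read the current tree:

	unmap_vmas() / zap_page_range()        <- generic mm callers
	  unmap_single_vma()
	    __unmap_hugepage_range_final()     <- only generic entry point
	      __unmap_hugepage_range()

	unmap_hugepage_range()                 <- hugetlb-internal callers,
	  __unmap_hugepage_range()                e.g. hugetlb_vmdelete_list()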
> > > > +	tlb_gather_mmu(&tlb, vma->vm_mm);
> > > > +	update_hiwater_rss(vma->vm_mm);
> > > > +
> > > > +	__unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0, false);
> > > > +
> > > > +	mmu_notifier_invalidate_range_end(&range);
> > > > +	tlb_finish_mmu(&tlb);
> > > > +}
> > > > +#endif
> > > > +
> > > >  void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
> > > >  			  unsigned long end, struct page *ref_page,
> > > >  			  zap_flags_t zap_flags)
> > > > diff --git a/mm/madvise.c b/mm/madvise.c
> > > > index 2baa93ca2310..90577a669635 100644
> > > > --- a/mm/madvise.c
> > > > +++ b/mm/madvise.c
> > > > @@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
> > > >  static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
> > > >  					unsigned long start, unsigned long end)
> > > >  {
> > > > -	zap_page_range(vma, start, end - start);
> > > > +	if (!is_vm_hugetlb_page(vma))
> > > > +		zap_page_range(vma, start, end - start);
> > > > +	else
> > > > +		clear_hugetlb_page_range(vma, start, end);
> > > >  	return 0;
> > > >  }
> > >
> > > This does look a bit unfortunate - zap_page_range() contains yet another
> > > is_vm_hugetlb_page() check (further down in unmap_single_vma); it can be
> > > very confusing which code path is really handling hugetlb.
> > >
> > > The other mm_users check in v3 doesn't need this change, but was a bit
> > > hackish to me, because IIUC we're clear on the call paths to trigger this
> > > (unmap_vmas), so it seems clean to me to pass that info from the upper
> > > stack.
> > >
> > > Maybe we can have a new zap_flags passed into unmap_single_vma() showing
> > > that it's destroying the vma?
> >
> > I thought about that. However, we would need to start passing the flag
> > here into zap_page_range as this is the beginning of that call down into
> > the hugetlb code where we do not want to remove the vma_lock.
>
> Right. I was thinking of just attaching the new flag in unmap_vmas(). Pseudo
> (not compiled) code attached.
I took your suggestions and came up with a new version of this patch. Not
sure if I love the new zap flag, as it is only used by hugetlb code. I also
added a bool to __unmap_hugepage_range to eliminate the duplicate notification
calls.
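To illustrate, the duplication that bool avoids was on the unmap_vmas()
path (sketch):

	unmap_vmas()
	  mmu_notifier_invalidate_range_start()        <- generic notify
	  unmap_single_vma()
	    __unmap_hugepage_range_final()
	      __unmap_hugepage_range()
	        mmu_notifier_invalidate_range_start()  <- was a second,
	                                                  nested notify

With the bool, only callers not already covered by a notification (such
as unmap_hugepage_range) ask __unmap_hugepage_range to do its own.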
From 15ffe922b60af9f4c19927d5d5aaca75840d0f6c Mon Sep 17 00:00:00 2001
From: Mike Kravetz <mike.kravetz@...cle.com>
Date: Fri, 28 Oct 2022 07:46:50 -0700
Subject: [PATCH v5] hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED
processing

madvise(MADV_DONTNEED) ends up calling zap_page_range() to clear the page
tables associated with the address range. For hugetlb vmas,
zap_page_range will call __unmap_hugepage_range_final. However,
__unmap_hugepage_range_final assumes the passed vma is about to be removed
and deletes the vma_lock to prevent pmd sharing as the vma is on the way
out. In the case of madvise(MADV_DONTNEED) the vma remains, but the
missing vma_lock prevents pmd sharing and could potentially lead to issues
with truncation/fault races.

This issue was originally reported here [1] as a BUG triggered in
page_try_dup_anon_rmap. Prior to the introduction of the hugetlb
vma_lock, __unmap_hugepage_range_final cleared the VM_MAYSHARE flag to
prevent pmd sharing. Subsequent faults on this vma were confused:
VM_MAYSHARE indicates a sharable vma, but it was no longer set, so
page_mapping was not set in new pages added to the page table. This
resulted in pages that appeared anonymous in a VM_SHARED vma and
triggered the BUG.

Create a new routine clear_hugetlb_page_range() that can be called from
madvise(MADV_DONTNEED) for hugetlb vmas. It has the same setup as
zap_page_range, but does not delete the vma_lock. Also, add a new zap
flag ZAP_FLAG_UNMAP to indicate an unmap call from unmap_vmas(). This
is used to indicate the 'final' unmapping of a vma. Also, change the
routine __unmap_hugepage_range to take a notification_needed argument.
This is
used to prevent duplicate notifications.

[1] https://lore.kernel.org/lkml/CAO4mrfdLMXsao9RF4fUE8-Wfde8xmjsKrTNMNC9wjUb6JudD0g@mail.gmail.com/

Fixes: 90e7e7f5ef3f ("mm: enable MADV_DONTNEED for hugetlb mappings")
Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
Reported-by: Wei Chen <harperchen1110@...il.com>
Cc: <stable@...r.kernel.org>

---
 include/linux/hugetlb.h |  7 ++++
 include/linux/mm.h      |  3 ++
 mm/hugetlb.c            | 93 +++++++++++++++++++++++++++++++----------
 mm/madvise.c            |  5 ++-
 mm/memory.c             |  2 +-
 5 files changed, 86 insertions(+), 24 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3568b90b397d..badcb277603d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -158,6 +158,8 @@ long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
 void unmap_hugepage_range(struct vm_area_struct *,
 			  unsigned long, unsigned long, struct page *,
 			  zap_flags_t);
+void clear_hugetlb_page_range(struct vm_area_struct *vma,
+			  unsigned long start, unsigned long end);
 void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 			  struct vm_area_struct *vma,
 			  unsigned long start, unsigned long end,
@@ -428,6 +430,11 @@ static inline void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 	BUG();
 }
 
+static void __maybe_unused clear_hugetlb_page_range(struct vm_area_struct *vma,
+			unsigned long start, unsigned long end)
+{
+}
+
 static inline vm_fault_t hugetlb_fault(struct mm_struct *mm,
 			struct vm_area_struct *vma, unsigned long address,
 			unsigned int flags)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 978c17df053e..517c8cc8ccb9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3464,4 +3464,7 @@ madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
  */
 #define ZAP_FLAG_DROP_MARKER	((__force zap_flags_t) BIT(0))
 
+/* Set in unmap_vmas() to indicate an unmap call. Only used by hugetlb */
+#define ZAP_FLAG_UNMAP		((__force zap_flags_t) BIT(1))
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4a0289ef09fa..0309a7c0f3bc 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5062,7 +5062,8 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 
 static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			    unsigned long start, unsigned long end,
-			    struct page *ref_page, zap_flags_t zap_flags)
+			    struct page *ref_page, zap_flags_t zap_flags,
+			    bool notification_needed)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address;
@@ -5087,13 +5088,16 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 	tlb_change_page_size(tlb, sz);
 	tlb_start_vma(tlb, vma);
 
-	/*
-	 * If sharing possible, alert mmu notifiers of worst case.
-	 */
-	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, mm, start,
-				end);
-	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
-	mmu_notifier_invalidate_range_start(&range);
+	if (notification_needed) {
+		/*
+		 * If sharing possible, alert mmu notifiers of worst case.
+		 */
+		mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, mm,
+					start, end);
+		adjust_range_if_pmd_sharing_possible(vma, &range.start,
+						     &range.end);
+		mmu_notifier_invalidate_range_start(&range);
+	}
 	last_addr_mask = hugetlb_mask_last_page(h);
 	address = start;
 	for (; address < end; address += sz) {
@@ -5178,7 +5182,8 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 		if (ref_page)
 			break;
 	}
-	mmu_notifier_invalidate_range_end(&range);
+	if (notification_needed)
+		mmu_notifier_invalidate_range_end(&range);
 	tlb_end_vma(tlb, vma);
 
 	/*
@@ -5198,29 +5203,72 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 	tlb_flush_mmu_tlbonly(tlb);
 }
 
-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+static void __unmap_hugepage_range_locking(struct mmu_gather *tlb,
 			  struct vm_area_struct *vma, unsigned long start,
 			  unsigned long end, struct page *ref_page,
 			  zap_flags_t zap_flags)
 {
+	bool final = zap_flags & ZAP_FLAG_UNMAP;
+
 	hugetlb_vma_lock_write(vma);
 	i_mmap_lock_write(vma->vm_file->f_mapping);
 
-	__unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags);
+	__unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags,
+			       false);
 
-	/*
-	 * Unlock and free the vma lock before releasing i_mmap_rwsem. When
-	 * the vma_lock is freed, this makes the vma ineligible for pmd
-	 * sharing. And, i_mmap_rwsem is required to set up pmd sharing.
-	 * This is important as page tables for this unmapped range will
-	 * be asynchrously deleted. If the page tables are shared, there
-	 * will be issues when accessed by someone else.
-	 */
-	__hugetlb_vma_unlock_write_free(vma);
+	if (final) {
+		/*
+		 * Unlock and free the vma lock before releasing i_mmap_rwsem.
+		 * When the vma_lock is freed, this makes the vma ineligible
+		 * for pmd sharing. And, i_mmap_rwsem is required to set up
+		 * pmd sharing. This is important as page tables for this
+		 * unmapped range will be asynchronously deleted. If the page
+		 * tables are shared, there will be issues when accessed by
+		 * someone else.
+		 */
+		__hugetlb_vma_unlock_write_free(vma);
+		i_mmap_unlock_write(vma->vm_file->f_mapping);
+	} else {
+		i_mmap_unlock_write(vma->vm_file->f_mapping);
+		hugetlb_vma_unlock_write(vma);
+	}
+}
 
-	i_mmap_unlock_write(vma->vm_file->f_mapping);
+void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+			  struct vm_area_struct *vma, unsigned long start,
+			  unsigned long end, struct page *ref_page,
+			  zap_flags_t zap_flags)
+{
+	__unmap_hugepage_range_locking(tlb, vma, start, end, ref_page,
+				       zap_flags);
 }
 
+#ifdef CONFIG_ADVISE_SYSCALLS
+/*
+ * Similar setup as in zap_page_range(). madvise(MADV_DONTNEED) can not call
+ * zap_page_range for hugetlb vmas as __unmap_hugepage_range_final will delete
+ * the associated vma_lock.
+ */
+void clear_hugetlb_page_range(struct vm_area_struct *vma, unsigned long start,
+			      unsigned long end)
+{
+	struct mmu_notifier_range range;
+	struct mmu_gather tlb;
+
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+				start, end);
+	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
+	tlb_gather_mmu(&tlb, vma->vm_mm);
+	update_hiwater_rss(vma->vm_mm);
+	mmu_notifier_invalidate_range_start(&range);
+
+	__unmap_hugepage_range_locking(&tlb, vma, start, end, NULL, 0);
+
+	mmu_notifier_invalidate_range_end(&range);
+	tlb_finish_mmu(&tlb);
+}
+#endif
+
 void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 			  unsigned long end, struct page *ref_page,
 			  zap_flags_t zap_flags)
@@ -5228,7 +5276,8 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 	struct mmu_gather tlb;
 
 	tlb_gather_mmu(&tlb, vma->vm_mm);
-	__unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags);
+	__unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags,
+			       true);
 	tlb_finish_mmu(&tlb);
 }
 
diff --git a/mm/madvise.c b/mm/madvise.c
index c7105ec6d08c..d8b4d7e56939 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -790,7 +790,10 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
 static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
 					unsigned long start, unsigned long end)
 {
-	zap_page_range(vma, start, end - start);
+	if (!is_vm_hugetlb_page(vma))
+		zap_page_range(vma, start, end - start);
+	else
+		clear_hugetlb_page_range(vma, start, end);
 	return 0;
 }
 
diff --git a/mm/memory.c b/mm/memory.c
index c5599a9279b1..679b702af4ce 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1671,7 +1671,7 @@ void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
 {
 	struct mmu_notifier_range range;
 	struct zap_details details = {
-		.zap_flags = ZAP_FLAG_DROP_MARKER,
+		.zap_flags = ZAP_FLAG_DROP_MARKER | ZAP_FLAG_UNMAP,
 		/* Careful - we need to zap private pages too! */
 		.even_cows = true,
 	};
--
2.37.3