Message-ID: <86bff55b-d048-1500-cddc-2d53702d7a3b@nvidia.com>
Date: Wed, 7 Dec 2022 15:05:42 -0800
From: John Hubbard <jhubbard@...dia.com>
To: Peter Xu <peterx@...hat.com>
CC: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
Muchun Song <songmuchun@...edance.com>,
Andrea Arcangeli <aarcange@...hat.com>,
"James Houghton" <jthoughton@...gle.com>,
Jann Horn <jannh@...gle.com>, Rik van Riel <riel@...riel.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
"David Hildenbrand" <david@...hat.com>,
Nadav Amit <nadav.amit@...il.com>
Subject: Re: [PATCH v2 04/10] mm/hugetlb: Move swap entry handling into vma
lock when faulted
On 12/7/22 14:43, Peter Xu wrote:
> Note that here migration_entry_wait_huge() will release it.
>
> Sorry, it's definitely not as straightforward, but this is also something
> I couldn't come up with a better solution for, because we need the vma
> lock to protect the spinlock, which is used deep in the migration code
> path.
>
> That's also why I added a rich comment above; the "The vma lock will be
> released there" part is there precisely for that.
>
Yes, OK,
Reviewed-by: John Hubbard <jhubbard@...dia.com>
...and here is some fancy documentation polishing (incremental on top of
this specific patch), in case you would like to fold it in. It's optional,
but it makes me happier:
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 49f73677a418..e3bbd4869f68 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5809,6 +5809,10 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx)
 }
 #endif
 
+/*
+ * There are a few special cases in which this function returns while still
+ * holding locks. Those are noted inline.
+ */
 vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long address, unsigned int flags)
 {
@@ -5851,8 +5855,8 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	/* PTE markers should be handled the same way as none pte */
 	if (huge_pte_none_mostly(entry))
 		/*
-		 * hugetlb_no_page will drop vma lock and hugetlb fault
-		 * mutex internally, which make us return immediately.
+		 * hugetlb_no_page() will release both the vma lock and the
+		 * hugetlb fault mutex, so just return directly from that:
 		 */
 		return hugetlb_no_page(mm, vma, mapping, idx, address, ptep,
				       entry, flags);
@@ -5869,10 +5873,11 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (!pte_present(entry)) {
 		if (unlikely(is_hugetlb_entry_migration(entry))) {
 			/*
-			 * Release fault lock first because the vma lock is
-			 * needed to guard the huge_pte_lockptr() later in
-			 * migration_entry_wait_huge(). The vma lock will
-			 * be released there.
+			 * Release the hugetlb fault lock now, but retain the
+			 * vma lock, because it is needed to guard the
+			 * huge_pte_lockptr() later in
+			 * migration_entry_wait_huge(). The vma lock will be
+			 * released there.
 			 */
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 			migration_entry_wait_huge(vma, ptep);
diff --git a/mm/migrate.c b/mm/migrate.c
index d14f1f3ab073..a31df628b938 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -333,16 +333,18 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 }
 
 #ifdef CONFIG_HUGETLB_PAGE
+
+/*
+ * The vma read lock must be held upon entry. Holding that lock prevents either
+ * the pte or the ptl from being freed.
+ *
+ * This function will release the vma lock before returning.
+ */
 void __migration_entry_wait_huge(struct vm_area_struct *vma,
 				 pte_t *ptep, spinlock_t *ptl)
 {
 	pte_t pte;
 
-	/*
-	 * The vma read lock must be taken, which will be released before
-	 * the function returns. It makes sure the pgtable page (along
-	 * with its spin lock) not be freed in parallel.
-	 */
 	hugetlb_vma_assert_locked(vma);
 	spin_lock(ptl);
 	pte = huge_ptep_get(ptep);
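
Just to illustrate the locking contract the comments describe, here is a
purely hypothetical userspace sketch (not kernel code; the names vma_lock,
fault_mutex and wait_for_migration_entry() are made up for the example): the
caller takes a read lock plus a mutex, drops the mutex itself, and the helper
drops the read lock on the caller's behalf before returning.

/*
 * Illustrative userspace-only sketch: a pthread rwlock stands in for the
 * hugetlb vma lock and a pthread mutex stands in for the hugetlb fault
 * mutex. All names here are invented for the example.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t vma_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t fault_mutex = PTHREAD_MUTEX_INITIALIZER;

/*
 * Must be called with vma_lock held for read; releases it before returning,
 * on behalf of the caller -- the same hand-off that the comments above
 * document for migration_entry_wait_huge().
 */
static void wait_for_migration_entry(void)
{
	/* ... the real code would sleep here until migration completes ... */
	pthread_rwlock_unlock(&vma_lock);
}

static void fault_path(void)
{
	pthread_rwlock_rdlock(&vma_lock);
	pthread_mutex_lock(&fault_mutex);

	/* Found a migration entry: drop the mutex, but keep the rwlock... */
	pthread_mutex_unlock(&fault_mutex);

	/* ...because the helper still needs it, and will drop it itself. */
	wait_for_migration_entry();

	/* Nothing left to unlock here. */
	printf("fault path done, all locks released\n");
}

int main(void)
{
	fault_path();
	return 0;
}

The only point being made is that the callee, not the caller, ends up
releasing the read lock, which is why spelling that out in the comments is
worth it.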
thanks,
--
John Hubbard
NVIDIA