Message-ID: <20230105101844.1893104-25-jthoughton@google.com>
Date: Thu, 5 Jan 2023 10:18:22 +0000
From: James Houghton <jthoughton@...gle.com>
To: Mike Kravetz <mike.kravetz@...cle.com>,
Muchun Song <songmuchun@...edance.com>,
Peter Xu <peterx@...hat.com>
Cc: David Hildenbrand <david@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Mina Almasry <almasrymina@...gle.com>,
"Zach O'Keefe" <zokeefe@...gle.com>,
Manish Mishra <manish.mishra@...anix.com>,
Naoya Horiguchi <naoya.horiguchi@....com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Yang Shi <shy828301@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
James Houghton <jthoughton@...gle.com>
Subject: [PATCH 24/46] rmap: update hugetlb lock comment for HGM
The VMA lock is used to prevent high-granularity HugeTLB mappings from
being collapsed while other threads are doing high-granularity page
table walks.
Signed-off-by: James Houghton <jthoughton@...gle.com>
---
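(Not part of the patch: a minimal sketch of the two sides of this
synchronization, using the existing hugetlb VMA lock helpers. The
example_* functions below are illustrative names only.)

/* Read side: stabilize the mapping for a high-granularity walk. */
static void example_hgm_walk(struct vm_area_struct *vma, unsigned long addr)
{
	/* Blocks MADV_COLLAPSE and PMD unsharing while we walk. */
	hugetlb_vma_lock_read(vma);
	/* ... high-granularity page table walk of vma around addr ... */
	hugetlb_vma_unlock_read(vma);
}

/* Write side: a collapse path that frees high-granularity page tables. */
static void example_collapse(struct vm_area_struct *vma)
{
	/* Waits for all read-side walkers to drop the lock. */
	hugetlb_vma_lock_write(vma);
	/* ... collapse the range back to huge-page granularity ... */
	hugetlb_vma_unlock_write(vma);
}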
 include/linux/hugetlb.h | 12 ++++++++++++
 mm/rmap.c               |  3 ++-
 2 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index b7cf45535d64..daf993fdbc38 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -156,6 +156,18 @@ struct file_region {
 #endif
 };
 
+/*
+ * The HugeTLB VMA lock is used to synchronize HugeTLB page table walks.
+ * Right now, it is only used for VM_SHARED mappings.
+ * - The read lock is held when we want to stabilize mappings (prevent PMD
+ *   unsharing or MADV_COLLAPSE for high-granularity mappings).
+ * - The write lock is held when we want to free mappings (PMD unsharing and
+ *   MADV_COLLAPSE for high-granularity mappings).
+ *
+ * Note: For PMD unsharing and MADV_COLLAPSE, the i_mmap_rwsem is held for
+ * writing as well, so page table walkers will also be safe if they hold
+ * i_mmap_rwsem for at least reading. See hugetlb_walk() for more information.
+ */
 struct hugetlb_vma_lock {
 	struct kref refs;
 	struct rw_semaphore rw_sema;
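(Not part of the patch: per the Note above, a walker that holds
i_mmap_rwsem for reading is safe without the VMA lock, since PMD
unsharing and MADV_COLLAPSE both hold i_mmap_rwsem for writing. A
sketch; example_stable_walk is an illustrative name only.)

static void example_stable_walk(struct vm_area_struct *vma,
				unsigned long addr)
{
	struct address_space *mapping = vma->vm_file->f_mapping;
	pte_t *ptep;

	/* Held for reading: excludes PMD unsharing and MADV_COLLAPSE. */
	i_mmap_lock_read(mapping);
	ptep = hugetlb_walk(vma, addr, huge_page_size(hstate_vma(vma)));
	if (ptep) {
		/* ... the mapping at addr is stable here ... */
	}
	i_mmap_unlock_read(mapping);
}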
diff --git a/mm/rmap.c b/mm/rmap.c
index ff7e6c770b0a..076ea77010e5 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -47,7 +47,8 @@
  *
  * hugetlbfs PageHuge() take locks in this order:
  *   hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
- *   vma_lock (hugetlb specific lock for pmd_sharing)
+ *   vma_lock (hugetlb specific lock for pmd_sharing and high-granularity
+ *             mapping)
  *   mapping->i_mmap_rwsem (also used for hugetlb pmd sharing)
  *   page->flags PG_locked (lock_page)
  */
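(Not part of the patch: the documented ordering, as a fault-style path
might take the locks. example_lock_order is an illustrative name only;
the fault mutex hash/index setup follows the existing API.)

static void example_lock_order(struct vm_area_struct *vma,
			       struct address_space *mapping,
			       pgoff_t idx, struct page *page)
{
	u32 hash = hugetlb_fault_mutex_hash(mapping, idx);

	mutex_lock(&hugetlb_fault_mutex_table[hash]);	/* 1: fault mutex */
	hugetlb_vma_lock_read(vma);			/* 2: vma_lock */
	i_mmap_lock_read(mapping);			/* 3: i_mmap_rwsem */
	lock_page(page);				/* 4: page PG_locked */

	/* ... */

	unlock_page(page);
	i_mmap_unlock_read(mapping);
	hugetlb_vma_unlock_read(vma);
	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
}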
--
2.39.0.314.g84b9a713c41-goog