Message-ID: <afce1bdf-6b5f-3393-cafa-81148277773d@redhat.com>
Date: Wed, 30 Nov 2022 11:24:34 +0100
From: David Hildenbrand <david@...hat.com>
To: Peter Xu <peterx@...hat.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: James Houghton <jthoughton@...gle.com>, Jann Horn <jannh@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andrea Arcangeli <aarcange@...hat.com>, Rik van Riel <riel@...riel.com>,
	Nadav Amit <nadav.amit@...il.com>, Miaohe Lin <linmiaohe@...wei.com>,
	Muchun Song <songmuchun@...edance.com>,
	Mike Kravetz <mike.kravetz@...cle.com>
Subject: Re: [PATCH 03/10] mm/hugetlb: Document huge_pte_offset usage

On 29.11.22 20:35, Peter Xu wrote:
> huge_pte_offset() is potentially a pgtable walker, looking up pte_t* for a
> hugetlb address.
>
> Normally, it's always safe to walk a generic pgtable as long as we're with
> the mmap lock held for either read or write, because that guarantees the
> pgtable pages will always be valid during the process.
>
> But it's not true for hugetlbfs, especially shared: hugetlbfs can have its
> pgtable freed by pmd unsharing, it means that even with mmap lock held for
> current mm, the PMD pgtable page can still go away from under us if pmd
> unsharing is possible during the walk.
>
> So we have two ways to make it safe even for a shared mapping:
>
> (1) If we're with the hugetlb vma lock held for either read/write, it's
>     okay because pmd unshare cannot happen at all.
>
> (2) If we're with the i_mmap_rwsem lock held for either read/write, it's
>     okay because even if pmd unshare can happen, the pgtable page cannot
>     be freed from under us.
>
> Document it.
>
> Signed-off-by: Peter Xu <peterx@...hat.com>
> ---
>  include/linux/hugetlb.h | 32 ++++++++++++++++++++++++++++++++
>  1 file changed, 32 insertions(+)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 551834cd5299..81efd9b9baa2 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -192,6 +192,38 @@ extern struct list_head huge_boot_pages;
>
>  pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
>  			unsigned long addr, unsigned long sz);
> +/*
> + * huge_pte_offset(): Walk the hugetlb pgtable until the last level PTE.
> + * Returns the pte_t* if found, or NULL if the address is not mapped.
> + *
> + * Since this function will walk all the pgtable pages (including not only
> + * high-level pgtable page, but also PUD entry that can be unshared
> + * concurrently for VM_SHARED), the caller of this function should be
> + * responsible of its thread safety.  One can follow this rule:
> + *
> + * (1) For private mappings: pmd unsharing is not possible, so it'll
> + *     always be safe if we're with the mmap sem for either read or write.
> + *     This is normally always the case, IOW we don't need to do anything
> + *     special.

Maybe worth mentioning that hugetlb_vma_lock_read() and friends already
optimize for private mappings, to not take the VMA lock if not required.

Was happy to spot that optimization in there already :)

-- 
Thanks,

David / dhildenb
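For illustration, a minimal caller sketch following rule (1) above might look
like the code below. It is not part of the patch: hugetlb_addr_is_mapped() is
a hypothetical helper name, while hugetlb_vma_lock_read()/unlock_read(),
hstate_vma(), huge_page_size(), huge_pte_offset(), huge_ptep_get() and
huge_pte_none() are the existing kernel helpers discussed in this thread; the
caller is assumed to already hold the mmap lock so the VMA itself is stable.

static bool hugetlb_addr_is_mapped(struct vm_area_struct *vma,
				   unsigned long addr)
{
	unsigned long sz = huge_page_size(hstate_vma(vma));
	pte_t *ptep;
	bool mapped = false;

	/*
	 * Rule (1): hold the hugetlb VMA lock across the walk so a
	 * concurrent pmd unshare cannot free the pgtable page under us.
	 * For private mappings the lock/unlock are effectively no-ops.
	 */
	hugetlb_vma_lock_read(vma);

	ptep = huge_pte_offset(vma->vm_mm, addr, sz);
	if (ptep && !huge_pte_none(huge_ptep_get(ptep)))
		mapped = true;

	/* Only dereference ptep while the lock is still held. */
	hugetlb_vma_unlock_read(vma);

	return mapped;
}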