Message-ID: <dad171e1-cacf-e430-e91f-649ebeab605b@google.com>
Date: Thu, 1 Jun 2023 22:11:25 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Jann Horn <jannh@...gle.com>
cc: Hugh Dickins <hughd@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Mike Rapoport <rppt@...nel.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Qi Zheng <zhengqi.arch@...edance.com>,
Yang Shi <shy828301@...il.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Peter Xu <peterx@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Alistair Popple <apopple@...dia.com>,
Ralph Campbell <rcampbell@...dia.com>,
Ira Weiny <ira.weiny@...el.com>,
Steven Price <steven.price@....com>,
SeongJae Park <sj@...nel.org>,
Naoya Horiguchi <naoya.horiguchi@....com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Zack Rusin <zackr@...are.com>, Jason Gunthorpe <jgg@...pe.ca>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Minchan Kim <minchan@...nel.org>,
Christoph Hellwig <hch@...radead.org>,
Song Liu <song@...nel.org>,
Thomas Hellstrom <thomas.hellstrom@...ux.intel.com>,
Russell King <linux@...linux.org.uk>,
"David S. Miller" <davem@...emloft.net>,
Michael Ellerman <mpe@...erman.id.au>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Claudio Imbrenda <imbrenda@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
linux-arm-kernel@...ts.infradead.org, sparclinux@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-s390@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 10/12] mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock()
On Wed, 31 May 2023, Jann Horn wrote:
> On Mon, May 29, 2023 at 8:26 AM Hugh Dickins <hughd@...gle.com> wrote:
> > Bring collapse_and_free_pmd() back into collapse_pte_mapped_thp().
> > It does need mmap_read_lock(), but it does not need mmap_write_lock(),
> > nor vma_start_write() nor i_mmap lock nor anon_vma lock. All racing
> > paths are relying on pte_offset_map_lock() and pmd_lock(), so use those.
>
> I think there's a weirdness in the existing code, and this change
> probably turns that into a UAF bug.
>
> collapse_pte_mapped_thp() can be called on an address that might not
> be associated with a VMA anymore, and after this change, the page
> tables for that address might be in the middle of page table teardown
> in munmap(), right? The existing mmap_write_lock() guards against
> concurrent munmap() (so in the old code we are guaranteed to either
> see a normal VMA or not see the page tables anymore), but
> mmap_read_lock() only guards against the part of munmap() up to the
> mmap_write_downgrade() in do_vmi_align_munmap(), and unmap_region()
> (including free_pgtables()) happens after that.
Excellent point, thank you. Don't let anyone overhear us, but I have
to confess to you that that mmap_write_downgrade() has never impinged
forcefully enough on my consciousness: it's still my habit to think of
mmap_lock as exclusive over free_pgtables(), and I've not encountered
this bug in my testing.
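
(To make the window concrete, here is a rough sketch of the munmap()
ordering Jann describes; it is heavily simplified from
do_vmi_align_munmap() of that era, not the actual code:

	mmap_write_lock(mm);
	/* detach the VMAs being unmapped from the mm's VMA tree */
	...
	mmap_write_downgrade(mm);	/* exclusive -> shared */
	/*
	 * From here on, other mmap_read_lock() holders can run
	 * concurrently: the VMAs are already gone from the tree,
	 * but the page tables are only now being torn down.
	 */
	unmap_region(...);		/* includes free_pgtables() */
	mmap_read_unlock(mm);

So a racing mmap_read_lock() holder can find no VMA at an address and
yet still walk page tables that free_pgtables() is in the middle of
freeing.)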
Right, I'll gladly incorporate your collapse_pte_mapped_thp()
rearrangement below. And I'm reassured to realize that, by removing
the mmap_lock dependence elsewhere, I won't have got it wrong in other
places.
Thanks,
Hugh
>
> So we can now enter collapse_pte_mapped_thp() and race with concurrent
> free_pgtables() such that a PUD disappears under us while we're
> walking it or something like that:
>
>
> int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>                             bool install_pmd)
> {
>         struct mmu_notifier_range range;
>         unsigned long haddr = addr & HPAGE_PMD_MASK;
>         struct vm_area_struct *vma = vma_lookup(mm, haddr); // <<< returns NULL
>         struct page *hpage;
>         pte_t *start_pte, *pte;
>         pmd_t *pmd, pgt_pmd;
>         spinlock_t *pml, *ptl;
>         int nr_ptes = 0, result = SCAN_FAIL;
>         int i;
>
>         mmap_assert_locked(mm);
>
>         /* Fast check before locking page if already PMD-mapped */
>         result = find_pmd_or_thp_or_none(mm, haddr, &pmd); // <<< PUD UAF in here
>         if (result == SCAN_PMD_MAPPED)
>                 return result;
>
>         if (!vma || !vma->vm_file || // <<< bailout happens too late
>             !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>                 return SCAN_VMA_CHECK;
>
>
> I guess the right fix here is to make sure that at least the basic VMA
> revalidation stuff (making sure there still is a VMA covering this
> range) happens before find_pmd_or_thp_or_none()? Like:
>
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 301c0e54a2ef..5db365587556 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1481,15 +1481,15 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>
>          mmap_assert_locked(mm);
>
> +        if (!vma || !vma->vm_file ||
> +            !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
> +                return SCAN_VMA_CHECK;
> +
>          /* Fast check before locking page if already PMD-mapped */
>          result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
>          if (result == SCAN_PMD_MAPPED)
>                  return result;
>
> -        if (!vma || !vma->vm_file ||
> -            !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
> -                return SCAN_VMA_CHECK;
> -
>          /*
>           * If we are here, we've succeeded in replacing all the native pages
>           * in the page cache with a single hugepage. If a mm were to fault-in
>
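
(For readers following the locking argument in the patch description
above: the pattern being relied on, schematically and not the patch's
exact code, is that any path touching the page table revalidates under
the page table lock before acting on it:

	pmd = ...;	/* located while holding mmap_read_lock() */
	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
	if (!start_pte)		/* pmd changed under us: bail out */
		goto abort;
	/*
	 * The PTE table is now stable against concurrent collapse
	 * or freeing until pte_unmap_unlock().
	 */
	...
	pte_unmap_unlock(start_pte, ptl);

with pmd_lock() taken similarly on paths that modify the pmd entry
itself.)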