Message-ID: <CAHbLzkp7+ZrXkoYcVtqrd2mQN3FZ4Y6tyeZCd31Oubz=+esaJQ@mail.gmail.com>
Date: Mon, 28 Nov 2022 11:54:39 -0800
From: Yang Shi <shy828301@...il.com>
To: Jann Horn <jannh@...gle.com>
Cc: security@...nel.org, Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Peter Xu <peterx@...hat.com>,
John Hubbard <jhubbard@...dia.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v4 2/3] mm/khugepaged: Fix GUP-fast interaction by sending IPI
On Mon, Nov 28, 2022 at 10:03 AM Jann Horn <jannh@...gle.com> wrote:
>
> Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
> collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
> ensure that the page table was not removed by khugepaged in between.
>
> However, lockless_pages_from_mm() still requires that the page table is not
> concurrently freed or reused to store non-PTE data. Otherwise, problems
> can occur because:
>
> - deposited page tables can be freed when a THP page somewhere in the
> mm is removed
> - some architectures store non-PTE information inside deposited page
> tables (see radix__pgtable_trans_huge_deposit())
>
> Additionally, lockless_pages_from_mm() is also somewhat brittle with
> regards to page tables being repeatedly moved back and forth, but
> that shouldn't be an issue in practice.
>
> Fix it by sending IPIs (if the architecture uses
> semi-RCU-style page table freeing) before freeing/reusing page tables.
>
> As noted in mm/gup.c, on configs that define CONFIG_HAVE_FAST_GUP,
> there are two possible cases:
>
> 1. CONFIG_MMU_GATHER_RCU_TABLE_FREE is set, causing
> tlb_remove_table_sync_one() to send an IPI to synchronize with
> lockless_pages_from_mm().
> 2. CONFIG_MMU_GATHER_RCU_TABLE_FREE is unset, indicating that all
> TLB flushes are already guaranteed to send IPIs.
> tlb_remove_table_sync_one() will do nothing, but we've already
> run pmdp_collapse_flush(), which did a TLB flush, which must have
> involved IPIs.
I'm trying to catch up with the discussion after the holiday break. If
I understand correctly, you switched from always allocating a new page
table page (as we decided before) to sending IPIs to serialize against
fast-GUP; that is fine by me.

So the code now looks like:

    pmdp_collapse_flush()
    tlb_remove_table_sync_one()  /* sends IPI */

But the missing part is: how do we reach "TLB flushes are already
guaranteed to send IPIs" when CONFIG_MMU_GATHER_RCU_TABLE_FREE is
unset? ARM64 doesn't do that, IIRC. Or did I miss something?
>
> Cc: stable@...nel.org
> Fixes: ba76149f47d8 ("thp: khugepaged")
> Acked-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Jann Horn <jannh@...gle.com>
> ---
> v4:
> - added ack from David Hildenbrand
> - made commit message more verbose
>
> include/asm-generic/tlb.h | 4 ++++
> mm/khugepaged.c | 2 ++
> mm/mmu_gather.c | 4 +---
> 3 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 492dce43236ea..cab7cfebf40bd 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -222,12 +222,16 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
> #define tlb_needs_table_invalidate() (true)
> #endif
>
> +void tlb_remove_table_sync_one(void);
> +
> #else
>
> #ifdef tlb_needs_table_invalidate
> #error tlb_needs_table_invalidate() requires MMU_GATHER_RCU_TABLE_FREE
> #endif
>
> +static inline void tlb_remove_table_sync_one(void) { }
> +
> #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 674b111a24fa7..c3d3ce596bff7 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1057,6 +1057,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> _pmd = pmdp_collapse_flush(vma, address, pmd);
> spin_unlock(pmd_ptl);
> mmu_notifier_invalidate_range_end(&range);
> + tlb_remove_table_sync_one();
>
> spin_lock(pte_ptl);
> result = __collapse_huge_page_isolate(vma, address, pte, cc,
> @@ -1415,6 +1416,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
> lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
>
> pmd = pmdp_collapse_flush(vma, addr, pmdp);
> + tlb_remove_table_sync_one();
> mm_dec_nr_ptes(mm);
> page_table_check_pte_clear_range(mm, addr, pmd);
> pte_free(mm, pmd_pgtable(pmd));
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index add4244e5790d..3a2c3f8cad2fe 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -153,7 +153,7 @@ static void tlb_remove_table_smp_sync(void *arg)
> /* Simply deliver the interrupt */
> }
>
> -static void tlb_remove_table_sync_one(void)
> +void tlb_remove_table_sync_one(void)
> {
> /*
> * This isn't an RCU grace period and hence the page-tables cannot be
> @@ -177,8 +177,6 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
>
> #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>
> -static void tlb_remove_table_sync_one(void) { }
> -
> static void tlb_remove_table_free(struct mmu_table_batch *batch)
> {
> __tlb_remove_table_free(batch);
> --
> 2.38.1.584.g0f3c55d4c2-goog
>