Open Source and information security mailing list archives
 
Date:   Mon, 28 Nov 2022 14:46:19 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Jann Horn <jannh@...gle.com>, security@...nel.org,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     Yang Shi <shy828301@...il.com>, Peter Xu <peterx@...hat.com>,
        John Hubbard <jhubbard@...dia.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v3 2/3] mm/khugepaged: Fix GUP-fast interaction by sending
 IPI

On 25.11.22 22:37, Jann Horn wrote:
> Since commit 70cbc3cc78a99 ("mm: gup: fix the fast GUP race against THP
> collapse"), the lockless_pages_from_mm() fastpath rechecks the pmd_t to
> ensure that the page table was not removed by khugepaged in between.
> 
> However, lockless_pages_from_mm() still requires that the page table is not
> concurrently freed.

That's an interesting point. For anon THPs, the page table won't get 
immediately freed, but instead will be deposited in the "pgtable list" 
stored alongside the THP.

From there, it might get withdrawn (pgtable_trans_huge_withdraw()) and

a) Reused as a page table when splitting the THP. That should be fine: 
no garbage in it, it's simply a page table again.

b) Freed when zapping the THP (zap_deposited_table()). That would be bad.
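The deposit/withdraw lifecycle above can be modeled as a tiny userspace sketch (an assumption-laden analogue: deposit()/withdraw() only mirror the roles of pgtable_trans_huge_deposit()/pgtable_trans_huge_withdraw(); the real generic implementation links deposited tables through struct page metadata, not a separate field as here):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PT_ENTRIES 512

struct pgtable {
	unsigned long entries[PT_ENTRIES]; /* the table's actual contents */
	struct pgtable *next_deposited;    /* link kept *outside* the contents */
};

struct thp {
	struct pgtable *deposited; /* per-THP stack of spare page tables */
};

/* On collapse the now-empty PTE table is stashed on the list, not freed. */
static void deposit(struct thp *thp, struct pgtable *pt)
{
	memset(pt->entries, 0, sizeof(pt->entries)); /* still a sane, empty table */
	pt->next_deposited = thp->deposited;
	thp->deposited = pt;
}

/* Case a): withdrawn for reuse when splitting the THP -- no garbage inside.
 * Case b) would instead free the returned table (zap_deposited_table()). */
static struct pgtable *withdraw(struct thp *thp)
{
	struct pgtable *pt = thp->deposited;

	if (pt)
		thp->deposited = pt->next_deposited;
	return pt;
}
```

Because the link lives outside the table's contents here, a withdrawn table really is "simply a page table again".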

... but I just realized that, e.g., radix__pgtable_trans_huge_deposit() 
uses actual page content to link the deposited page tables, which means 
we'd already be storing garbage in there when depositing the page, not 
only when freeing+reusing it ...

Maybe worth adding that to the patch description.
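The intrusive variant can be sketched the same way (again an analogue, modeled on what radix__pgtable_trans_huge_deposit() does; the struct and function names here are made up). The key difference is that the link is written into the page's own bytes:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PT_BYTES 4096

struct thp_radix {
	void *deposited; /* most recently deposited table page, or NULL */
};

static void radix_deposit(struct thp_radix *thp, void *pt_page)
{
	/*
	 * The link goes into the first bytes of the table page itself: from
	 * this point the page no longer holds valid page-table entries --
	 * the "garbage at deposit time" problem described above.
	 */
	memcpy(pt_page, &thp->deposited, sizeof(void *));
	thp->deposited = pt_page;
}

static void *radix_withdraw(struct thp_radix *thp)
{
	void *pt_page = thp->deposited;

	if (pt_page)
		memcpy(&thp->deposited, pt_page, sizeof(void *));
	return pt_page;
}
```

A lockless walker that still holds a pointer to such a page would read a list pointer where it expects PTEs, which is why the deposit itself is already dangerous without first waiting out concurrent walkers.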

> Fix it by sending IPIs (if the architecture uses
> semi-RCU-style page table freeing) before freeing/reusing page tables.
> 
> Cc: stable@...nel.org
> Fixes: ba76149f47d8 ("thp: khugepaged")
> Signed-off-by: Jann Horn <jannh@...gle.com>
> ---
> replaced the mmu_gather-based scheme with an RCU call as suggested by
> Peter Xu
> 
>   include/asm-generic/tlb.h | 4 ++++
>   mm/khugepaged.c           | 2 ++
>   mm/mmu_gather.c           | 4 +---
>   3 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 492dce43236ea..cab7cfebf40bd 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -222,12 +222,16 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
>   #define tlb_needs_table_invalidate() (true)
>   #endif
>   
> +void tlb_remove_table_sync_one(void);
> +
>   #else
>   
>   #ifdef tlb_needs_table_invalidate
>   #error tlb_needs_table_invalidate() requires MMU_GATHER_RCU_TABLE_FREE
>   #endif
>   
> +static inline void tlb_remove_table_sync_one(void) { }
> +
>   #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>   
>   
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 674b111a24fa7..c3d3ce596bff7 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1057,6 +1057,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>   	_pmd = pmdp_collapse_flush(vma, address, pmd);
>   	spin_unlock(pmd_ptl);
>   	mmu_notifier_invalidate_range_end(&range);
> +	tlb_remove_table_sync_one();
>   
>   	spin_lock(pte_ptl);
>   	result =  __collapse_huge_page_isolate(vma, address, pte, cc,
> @@ -1415,6 +1416,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
>   		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
>   
>   	pmd = pmdp_collapse_flush(vma, addr, pmdp);
> +	tlb_remove_table_sync_one();
>   	mm_dec_nr_ptes(mm);
>   	page_table_check_pte_clear_range(mm, addr, pmd);
>   	pte_free(mm, pmd_pgtable(pmd));
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index add4244e5790d..3a2c3f8cad2fe 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -153,7 +153,7 @@ static void tlb_remove_table_smp_sync(void *arg)
>   	/* Simply deliver the interrupt */
>   }
>   
> -static void tlb_remove_table_sync_one(void)
> +void tlb_remove_table_sync_one(void)
>   {
>   	/*
>   	 * This isn't an RCU grace period and hence the page-tables cannot be
> @@ -177,8 +177,6 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
>   
>   #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>   
> -static void tlb_remove_table_sync_one(void) { }
> -
>   static void tlb_remove_table_free(struct mmu_table_batch *batch)
>   {
>   	__tlb_remove_table_free(batch);

With CONFIG_MMU_GATHER_RCU_TABLE_FREE this will most certainly do the 
right thing. Without CONFIG_MMU_GATHER_RCU_TABLE_FREE, I assume the 
expectation is that there will be an implicit IPI.

That implicit IPI has to happen before we deposit. I assume that is 
expected to happen during pmdp_collapse_flush()?

-- 
Thanks,

David / dhildenb
