Message-ID: <dc26625b-6658-c078-76d2-7e975a04b1d4@ghiti.fr>
Date: Tue, 1 Aug 2023 15:56:06 +0200
From: Alexandre Ghiti <alex@...ti.fr>
To: Dylan Jhong <dylan@...estech.com>, linux-kernel@...r.kernel.org,
linux-riscv@...ts.infradead.org, wangkefeng.wang@...wei.com,
tongtiangen@...wei.com, guoren@...nel.org,
sergey.matyukevich@...tacore.com, gregkh@...uxfoundation.org,
ajones@...tanamicro.com, aou@...s.berkeley.edu, palmer@...belt.com,
paul.walmsley@...ive.com, conor.dooley@...rochip.com
Cc: x5710999x@...il.com, tim609@...estech.com, cl634@...estech.com,
ycliang@...estech.com
Subject: Re: [PATCH] riscv: Flush stale TLB entry with VMAP_STACK enabled
Hi Dylan,
On 01/08/2023 11:09, Dylan Jhong wrote:
> When VMAP_STACK is enabled, the kernel stack will be obtained through
> vmalloc(). Normally, we rely on the logic in vmalloc_fault() to update stale
> P*D entries covering the vmalloc space in a task's page tables when it first
> accesses the problematic region.
I guess that's for rv32, right? Because vmalloc_fault() has been removed
for rv64 in 6.5.
Here you describe the issue as being caused by the vmap stack landing in
a new PGD, which then needs a page table synchronization in
vmalloc_fault(); but that synchronization can't happen, since
vmalloc_fault() itself needs this same stack to be mapped in the current
page table.
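For reference, here is a simplified, from-memory sketch of what that lazy
fixup looked like (modeled on the pre-6.5 arch/riscv/mm/fault.c, details
elided): the handler copies the missing kernel PGD entry from init_mm
into the faulting task's page table, and being an ordinary C function it
already needs a mapped stack to run at all:

/*
 * Simplified sketch, not a verbatim copy: synchronize this task's
 * top-level entry with the reference kernel page table.
 */
static void vmalloc_fault(struct pt_regs *regs, int code, unsigned long addr)
{
        pgd_t *pgd, *pgd_k;
        unsigned long pfn;
        int index;

        index = pgd_index(addr);
        pfn = csr_read(CSR_SATP) & SATP_PPN;
        pgd = (pgd_t *)pfn_to_virt(pfn) + index;  /* current page table */
        pgd_k = init_mm.pgd + index;              /* kernel reference */

        if (!pgd_present(*pgd_k)) {
                no_context(regs, addr);           /* genuinely bad access */
                return;
        }
        set_pgd(pgd, *pgd_k);                     /* copy the missing entry */
        /* the real code went on to check the lower levels as well */
}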
> Unfortunately, this is not sufficient when
> the kernel stack resides in the vmalloc region, because vmalloc_fault() is a
> C function that needs a stack to run. So we need to ensure that these P*D
> entries are up to date *before* the MM switch.
>
> Here's our symptom:
> core 0: A speculative load brings the kernel stack mapping into the TLB
> before the corresponding kernel stack page table entry is created.
> core 1: Create page table mapping of that kernel stack.
> core 0: After a context switch, the kernel attempts to use the stack region.
> However, even though the page table is now correct, the stack address
> mapping cached in the TLB is stale, leading to subsequent nested exceptions.
But the problem you describe here is different, since it seems to be
caused by the TLB caching stale entries, which then requires an
sfence.vma so that the stale translation is discarded and the new,
correct entry is used.
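(For reference, the local flush is just the fence instruction; this
mirrors local_flush_tlb_all()/local_flush_tlb_page() in
arch/riscv/include/asm/tlbflush.h:)

/* The local TLB flush is a single instruction; flushing remote harts
 * goes through the SBI instead.
 */
static inline void local_flush_tlb_all(void)
{
        __asm__ __volatile__ ("sfence.vma" : : : "memory");
}

static inline void local_flush_tlb_page(unsigned long addr)
{
        __asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
}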
>
> This fix is inspired by ARM's approach[*1], commit a1c510d0adc6 ("ARM:
> implement support for vmap'ed stacks"), which also performs a TLB flush
> after setting up the page tables in vmalloc().
> Fixes: 31da94c25aea ("riscv: add VMAP_STACK overflow detection")
> Signed-off-by: Dylan Jhong <dylan@...estech.com>
> ---
> arch/riscv/include/asm/page.h | 4 ++++
> arch/riscv/mm/tlbflush.c | 16 ++++++++++++++++
> 2 files changed, 20 insertions(+)
>
> diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
> index 349fad5e35de..c9b080a72855 100644
> --- a/arch/riscv/include/asm/page.h
> +++ b/arch/riscv/include/asm/page.h
> @@ -21,6 +21,10 @@
> #define HPAGE_MASK (~(HPAGE_SIZE - 1))
> #define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT)
>
> +#ifdef CONFIG_VMAP_STACK
> +#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PTE_MODIFIED
> +#endif
> +
> /*
> * PAGE_OFFSET -- the first address of the first page of memory.
> * When not using MMU this corresponds to the first free page in
> diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
> index ef701fa83f36..0799978913ee 100644
> --- a/arch/riscv/mm/tlbflush.c
> +++ b/arch/riscv/mm/tlbflush.c
> @@ -86,3 +86,19 @@ void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
> __sbi_tlb_flush_range(vma->vm_mm, start, end - start, PMD_SIZE);
> }
> #endif
> +
> +#ifdef CONFIG_VMAP_STACK
> +/*
> + * Normally, we rely on the logic in vmalloc_fault() to update stale P*D
> + * entries covering the vmalloc space in a task's page tables when it first
> + * accesses the problematic region. Unfortunately, this is not sufficient when
> + * the kernel stack resides in the vmalloc region, because vmalloc_fault() is a
> + * C function that needs a stack to run. So we need to ensure that these P*D
> + * entries are up to date *before* the MM switch.
> + */
> +void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
> +{
> + if (start < VMALLOC_END && end > VMALLOC_START)
> + flush_tlb_all();
> +}
> +#endif
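(For context: if I read the generic code right, mm/vmalloc.c records
which page-table levels it modified while mapping an area and invokes
this hook when they intersect ARCH_PAGE_TABLE_SYNC_MASK, roughly:

        if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
                arch_sync_kernel_mappings(start, end);

so defining the mask as PGTBL_PTE_MODIFIED above makes the hook fire for
every new vmalloc PTE.)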
And if that works for you, I'd say the problem is the latter: the TLB
caching stale entries, since you don't synchronize the page tables here.
This looks a lot like the patch I proposed at
https://patchwork.kernel.org/project/linux-riscv/patch/20230725132246.817726-1-alexghiti@rivosinc.com/
which implements flush_cache_vmap().
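For reference, the core of that patch is essentially a one-liner in
arch/riscv/include/asm/cacheflush.h (sketched from memory, see the link
above for the exact version):

/* Flush the TLB as soon as the kernel page tables for a vmap area are
 * set up, instead of relying on a fault that a vmap stack cannot take.
 * On riscv, flush_tlb_kernel_range() currently expands to
 * flush_tlb_all().
 */
#define flush_cache_vmap(start, end)    flush_tlb_kernel_range(start, end)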
So, if I'm not mistaken, we have another problem on 32-bit: I guess that
in your example core 0 and core 1 execute in the same address space (i.e.
the same page table), so a simple sfence.vma gets rid of the stale entry
and things can go on. But what if two page tables are created with the
same vmalloc mappings, one of them adds a new PGD entry covering the
vmalloc region, and the other one, which does not have that entry in its
page table, then allocates its vmap stack in this new PGD? The latter
would never be able to recover from the vmalloc fault, since it needs a
stack to copy the new PGD entry into its page table, and it needs that
very PGD entry to have a stack.
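Fixing that properly would need something in the spirit of x86's
sync_global_pgds(): eagerly copying any new kernel PGD entry into every
process page table. A very rough, hypothetical sketch of the idea
(riscv keeps no pgd_list of process page tables today, which is exactly
the problem; locking is omitted too):

/*
 * Hypothetical sketch only, not existing riscv code: propagate new
 * kernel PGD entries to all process page tables, in the spirit of
 * x86's sync_global_pgds().
 */
void sync_kernel_pgds(unsigned long start, unsigned long end)
{
        unsigned long addr;

        for (addr = start & PGDIR_MASK; addr < end; addr += PGDIR_SIZE) {
                pgd_t *pgd_ref = pgd_offset_k(addr);
                struct page *page;

                if (pgd_none(*pgd_ref))
                        continue;

                list_for_each_entry(page, &pgd_list, lru) {
                        pgd_t *pgd = (pgd_t *)page_address(page)
                                     + pgd_index(addr);

                        if (pgd_none(*pgd))
                                set_pgd(pgd, *pgd_ref);
                }
        }
}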
Let me know if I'm completely wrong here!
Thanks,
Alex