Message-ID: <CAHVXubjfuKZ1PBYQ8By41OX65YpAma3_kmSL7urT8L0PmMxFnQ@mail.gmail.com>
Date: Mon, 11 Dec 2023 06:52:02 +0100
From: Alexandre Ghiti <alexghiti@...osinc.com>
To: guoren@...nel.org
Cc: paul.walmsley@...ive.com, palmer@...belt.com,
akpm@...ux-foundation.org, catalin.marinas@....com,
willy@...radead.org, david@...hat.com, muchun.song@...ux.dev,
will@...nel.org, peterz@...radead.org, rppt@...nel.org,
paulmck@...nel.org, atishp@...shpatra.org, anup@...infault.org,
alex@...ti.fr, mike.kravetz@...cle.com, dfustini@...libre.com,
wefu@...hat.com, jszhang@...nel.org, falcon@...ylab.org,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
Guo Ren <guoren@...ux.alibaba.com>
Subject: Re: [PATCH] riscv: pgtable: Enhance set_pte to prevent OoO risk
Hi Guo,
On Fri, Dec 8, 2023 at 4:10 PM <guoren@...nel.org> wrote:
>
> From: Guo Ren <guoren@...ux.alibaba.com>
>
> When changing a kernel page's pte from invalid to valid, there is no
> need for a tlb_flush. That is fine under the TSO memory model, but
> under the weak memory model there is an OoO (out-of-order) risk, e.g.:
>
> sd t0, (a0) // a0 = pte address, pteval is changed from invalid to valid
> ...
> ld t1, (a1) // a1 = va of above pte
>
> If the ld instruction is executed speculatively before the sd
> instruction, it can bring an invalid entry into the TLB, and when the
> ld instruction retires, a spurious page fault occurs. Because the
> vmemmap is ignored by vmalloc_fault, the spurious page fault causes a
> kernel panic.
>
> This patch was inspired by commit 7f0b1bf04511 ("arm64: Fix barriers
> used for page table modifications"). For RISC-V, there is no
> requirement in the spec that all TLB entries be valid, and no
> requirement that the PTW filter out invalid entries. Of course, a
> micro-arch could provide a more robust design, but here a software
> fence is used to provide the guarantee.
>
> Signed-off-by: Guo Ren <guoren@...ux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@...nel.org>
> ---
> arch/riscv/include/asm/pgtable.h | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 294044429e8e..2fae5a5438e0 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -511,6 +511,13 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
> static inline void set_pte(pte_t *ptep, pte_t pteval)
> {
> *ptep = pteval;
> +
> + /*
> + * Only if the new pte is present and kernel, otherwise TLB
> + * maintenance or update_mmu_cache() have the necessary barriers.
> + */
> + if (pte_val(pteval) & (_PAGE_PRESENT | _PAGE_GLOBAL))
> + RISCV_FENCE(rw,rw);
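As a side note, the condition above fires when *either* bit is set,
not only for present kernel (global) mappings; if the intent is
"present and global", as the comment says, I would have expected a
comparison against the full mask, something like:

	if ((pte_val(pteval) & (_PAGE_PRESENT | _PAGE_GLOBAL)) ==
	    (_PAGE_PRESENT | _PAGE_GLOBAL))
		RISCV_FENCE(rw, rw);
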
Only an sfence.vma can guarantee that the PTW actually sees a new
mapping; a fence is not enough. That being said, new kernel mappings
(the vmalloc ones) are already handled correctly in the kernel via
flush_cache_vmap(). Did you observe something that this patch fixes?
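
For reference, a sketch of how those mappings are handled today (based
on the current riscv tree, so the exact location may differ):
flush_cache_vmap() on riscv resolves to a kernel-range TLB flush, so a
new vmalloc mapping ends up with the required sfence.vma:

	/* arch/riscv/include/asm/cacheflush.h */
	#define flush_cache_vmap(start, end)	flush_tlb_kernel_range(start, end)

vmalloc populates the kernel page tables and then calls
flush_cache_vmap() on the new range, which emits the sfence.vma that
makes the mapping visible to the PTW.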
Thanks,
Alex
> }
>
> void flush_icache_pte(pte_t pte);
> --
> 2.40.1
>