Message-ID: <mhng-875f43ab-ca1e-4b18-a12d-1a0dc8519b53@palmerdabbelt-glaptop>
Date:   Tue, 16 Mar 2021 21:17:27 -0700 (PDT)
From:   Palmer Dabbelt <palmer@...belt.com>
To:     liu@...yang.me
CC:     waterman@...s.berkeley.edu, liu@...yang.me,
        Paul Walmsley <paul.walmsley@...ive.com>,
        aou@...s.berkeley.edu, Atish Patra <Atish.Patra@....com>,
        Anup Patel <Anup.Patel@....com>, akpm@...ux-foundation.org,
        rppt@...nel.org, wangkefeng.wang@...wei.com,
        greentime.hu@...ive.com, zong.li@...ive.com,
        linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject:     Re: [PATCH] Insert SFENCE.VMA in function set_pte_at for RISCV

On Tue, 09 Mar 2021 22:22:46 PST (-0800), liu@...yang.me wrote:
> From: Jiuyang Liu <liu@...yang.me>
>
> This patch inserts SFENCE.VMA after modifying a PTE, as required by the
> RISC-V specification.
>
> arch/riscv/include/asm/pgtable.h:
> 1. implement pte_user, pte_global and pte_leaf to check the
> corresponding attribute of a pte_t.
>
> 2. insert SFENCE.VMA in set_pte_at based on RISC-V Volume 2, Privileged
> Spec v. 20190608 page 66 and 67:
> If software modifies a non-leaf PTE, it should execute SFENCE.VMA with
> rs1=x0. If any PTE along the traversal path had its G bit set, rs2 must
> be x0; otherwise, rs2 should be set to the ASID for which the
> translation is being modified.
> If software modifies a leaf PTE, it should execute SFENCE.VMA with rs1
> set to a virtual address within the page. If any PTE along the traversal
> path had its G bit set, rs2 must be x0; otherwise, rs2 should be set to
> the ASID for which the translation is being modified.
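
Just to spell out the four cases the quoted spec text distinguishes, the
operand choices look roughly like this (an illustrative sketch, not code
from this patch; addr and asid stand in for the relevant values):

	/* non-leaf PTE modified, some PTE on the path has G set */
	__asm__ __volatile__ ("sfence.vma" : : : "memory");

	/* non-leaf PTE modified, no G bit on the path */
	__asm__ __volatile__ ("sfence.vma x0, %0" : : "r" (asid) : "memory");

	/* leaf PTE modified, some PTE on the path has G set */
	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");

	/* leaf PTE modified, no G bit on the path */
	__asm__ __volatile__ ("sfence.vma %0, %1"
			      : : "r" (addr), "r" (asid) : "memory");
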
>
> arch/riscv/include/asm/tlbflush.h:
> 1. implement local_flush_tlb_asid to flush the TLB by ASID.
>
> Signed-off-by: Jiuyang Liu <liu@...yang.me>
> ---
>  arch/riscv/include/asm/pgtable.h  | 28 ++++++++++++++++++++++++++++
>  arch/riscv/include/asm/tlbflush.h |  8 ++++++++
>  2 files changed, 36 insertions(+)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index ebf817c1bdf4..95f6546ddb5b 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -222,6 +222,16 @@ static inline int pte_write(pte_t pte)
>  	return pte_val(pte) & _PAGE_WRITE;
>  }
>
> +static inline int pte_user(pte_t pte)
> +{
> +	return pte_val(pte) & _PAGE_USER;
> +}
> +
> +static inline int pte_global(pte_t pte)
> +{
> +	return pte_val(pte) & _PAGE_GLOBAL;
> +}
> +
>  static inline int pte_exec(pte_t pte)
>  {
>  	return pte_val(pte) & _PAGE_EXEC;
> @@ -248,6 +258,11 @@ static inline int pte_special(pte_t pte)
>  	return pte_val(pte) & _PAGE_SPECIAL;
>  }
>
> +static inline int pte_leaf(pte_t pte)
> +{
> +	return pte_val(pte) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC);
> +}
> +
>  /* static inline pte_t pte_rdprotect(pte_t pte) */
>
>  static inline pte_t pte_wrprotect(pte_t pte)
> @@ -358,6 +370,18 @@ static inline void set_pte_at(struct mm_struct *mm,
>  		flush_icache_pte(pteval);
>
>  	set_pte(ptep, pteval);
> +
> +	if (pte_present(pteval)) {
> +		if (pte_leaf(pteval)) {
> +			local_flush_tlb_page(addr);
> +		} else {
> +			if (pte_global(pteval))
> +				local_flush_tlb_all();
> +			else
> +				local_flush_tlb_asid();
> +
> +		}
> +	}
>  }
>
>  static inline void pte_clear(struct mm_struct *mm,
> diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
> index 394cfbccdcd9..4b25f51f163d 100644
> --- a/arch/riscv/include/asm/tlbflush.h
> +++ b/arch/riscv/include/asm/tlbflush.h
> @@ -21,6 +21,14 @@ static inline void local_flush_tlb_page(unsigned long addr)
>  {
>  	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
>  }
> +
> +static inline void local_flush_tlb_asid(void)
> +{
> +	unsigned long asid =
> +		(csr_read(CSR_SATP) >> SATP_ASID_SHIFT) & SATP_ASID_MASK;
> +	__asm__ __volatile__ ("sfence.vma x0, %0" : : "r" (asid) : "memory");
> +}
> +
>  #else /* CONFIG_MMU */
>  #define local_flush_tlb_all()			do { } while (0)
>  #define local_flush_tlb_page(addr)		do { } while (0)

We're trying to avoid this sort of thing, instead relying on the generic kernel
functionality to batch up page table modifications before we issue the fences.
If you're seeing some specific issue then I'd be happy to try and sort out a
fix for it, but this is a bit heavy-handed to use as anything but a last
resort.
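
To be concrete about the generic functionality I mean: the mmu_gather API
in <asm-generic/tlb.h> batches up page table modifications and issues one
(possibly ranged) flush at the end, instead of fencing on every PTE
write.  The usage pattern is roughly this (a sketch only -- the
surrounding context is made up, and the tlb_gather_mmu() signature has
varied across kernel versions):

	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm);
	/* ... walk the page tables, clearing PTEs; record each one: */
	tlb_remove_tlb_entry(&tlb, ptep, addr);
	/* ... */
	tlb_finish_mmu(&tlb);	/* one batched flush (sfence.vma for us) */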
