Message-ID: <7d741701-8966-45f1-8404-4b3618d44ea4@sifive.com>
Date: Fri, 16 Aug 2024 11:05:50 -0500
From: Samuel Holland <samuel.holland@...ive.com>
To: Anup Patel <apatel@...tanamicro.com>
Cc: Palmer Dabbelt <palmer@...belt.com>, linux-riscv@...ts.infradead.org,
devicetree@...r.kernel.org, Catalin Marinas <catalin.marinas@....com>,
linux-kernel@...r.kernel.org, Anup Patel <anup@...infault.org>,
Conor Dooley <conor@...nel.org>, kasan-dev@...glegroups.com,
Atish Patra <atishp@...shpatra.org>, Evgenii Stepanov <eugenis@...gle.com>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
Rob Herring <robh+dt@...nel.org>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH v3 04/10] riscv: Add support for userspace pointer masking
Hi Anup,
On 2024-08-16 8:21 AM, Anup Patel wrote:
> On Wed, Aug 14, 2024 at 1:45 PM Samuel Holland
> <samuel.holland@...ive.com> wrote:
>>
>> RISC-V supports pointer masking with a variable number of tag bits
>> (called "PMLEN" in the specification), which is configured at the
>> next higher privilege level.
>>
>> Wire up the PR_SET_TAGGED_ADDR_CTRL and PR_GET_TAGGED_ADDR_CTRL prctls
>> so userspace can request a lower bound on the number of tag bits and
>> determine the actual number of tag bits. As with arm64's
>> PR_TAGGED_ADDR_ENABLE, the pointer masking configuration is
>> thread-scoped, inherited on clone() and fork() and cleared on execve().
>>
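
For illustration, a minimal userspace sketch of the API being wired up
here (not part of the patch; the PR_PMLEN_* values are assumed to match
the uapi additions elsewhere in this series, and are guarded in case the
libc headers lack them):

/*
 * Request at least 7 tag bits, then read back the PMLEN the kernel
 * actually granted (it may grant more than requested, e.g. 16).
 */
#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL	55
#define PR_GET_TAGGED_ADDR_CTRL	56
#define PR_TAGGED_ADDR_ENABLE	(1UL << 0)
#endif
#ifndef PR_PMLEN_SHIFT
#define PR_PMLEN_SHIFT		24
#define PR_PMLEN_MASK		(0x7fUL << PR_PMLEN_SHIFT)
#endif

int main(void)
{
	/* Ask for a lower bound of 7 tag bits. */
	if (prctl(PR_SET_TAGGED_ADDR_CTRL,
		  PR_TAGGED_ADDR_ENABLE | (7UL << PR_PMLEN_SHIFT), 0, 0, 0))
		return 1;

	/* The return value encodes the actual configuration. */
	long ctrl = prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0);
	if (ctrl < 0)
		return 1;
	printf("PMLEN = %ld\n", (ctrl & PR_PMLEN_MASK) >> PR_PMLEN_SHIFT);
	return 0;
}
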
>> Signed-off-by: Samuel Holland <samuel.holland@...ive.com>
>> ---
>>
>> Changes in v3:
>> - Rename CONFIG_RISCV_ISA_POINTER_MASKING to CONFIG_RISCV_ISA_SUPM,
>> since it only controls the userspace part of pointer masking
>> - Use IS_ENABLED instead of #ifdef when possible
>> - Use an enum for the supported PMLEN values
>> - Simplify the logic in set_tagged_addr_ctrl()
>>
>> Changes in v2:
>> - Rebase on riscv/linux.git for-next
>> - Add and use the envcfg_update_bits() helper function
>> - Inline flush_tagged_addr_state()
>>
>> arch/riscv/Kconfig | 11 ++++
>> arch/riscv/include/asm/processor.h | 8 +++
>> arch/riscv/include/asm/switch_to.h | 11 ++++
>> arch/riscv/kernel/process.c | 90 ++++++++++++++++++++++++++++++
>> include/uapi/linux/prctl.h | 3 +
>> 5 files changed, 123 insertions(+)
>>
>> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>> index 0f3cd7c3a436..817437157138 100644
>> --- a/arch/riscv/Kconfig
>> +++ b/arch/riscv/Kconfig
>> @@ -512,6 +512,17 @@ config RISCV_ISA_C
>>
>> If you don't know what to do here, say Y.
>>
>> +config RISCV_ISA_SUPM
>> + bool "Supm extension for userspace pointer masking"
>> + depends on 64BIT
>> + default y
>> + help
>> + Add support for pointer masking in userspace (Supm) when the
>> + underlying hardware extension (Smnpm or Ssnpm) is detected at boot.
>> +
>> + If this option is disabled, userspace will be unable to use
>> + the prctl(PR_{SET,GET}_TAGGED_ADDR_CTRL) API.
>> +
>> config RISCV_ISA_SVNAPOT
>> bool "Svnapot extension support for supervisor mode NAPOT pages"
>> depends on 64BIT && MMU
>> diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
>> index 586e4ab701c4..5c4d4fb97314 100644
>> --- a/arch/riscv/include/asm/processor.h
>> +++ b/arch/riscv/include/asm/processor.h
>> @@ -200,6 +200,14 @@ extern int set_unalign_ctl(struct task_struct *tsk, unsigned int val);
>> #define RISCV_SET_ICACHE_FLUSH_CTX(arg1, arg2) riscv_set_icache_flush_ctx(arg1, arg2)
>> extern int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long per_thread);
>>
>> +#ifdef CONFIG_RISCV_ISA_SUPM
>> +/* PR_{SET,GET}_TAGGED_ADDR_CTRL prctl */
>> +long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg);
>> +long get_tagged_addr_ctrl(struct task_struct *task);
>> +#define SET_TAGGED_ADDR_CTRL(arg) set_tagged_addr_ctrl(current, arg)
>> +#define GET_TAGGED_ADDR_CTRL() get_tagged_addr_ctrl(current)
>> +#endif
>> +
>> #endif /* __ASSEMBLY__ */
>>
>> #endif /* _ASM_RISCV_PROCESSOR_H */
>> diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
>> index 9685cd85e57c..94e33216b2d9 100644
>> --- a/arch/riscv/include/asm/switch_to.h
>> +++ b/arch/riscv/include/asm/switch_to.h
>> @@ -70,6 +70,17 @@ static __always_inline bool has_fpu(void) { return false; }
>> #define __switch_to_fpu(__prev, __next) do { } while (0)
>> #endif
>>
>> +static inline void envcfg_update_bits(struct task_struct *task,
>> + unsigned long mask, unsigned long val)
>> +{
>> + unsigned long envcfg;
>> +
>> + envcfg = (task->thread.envcfg & ~mask) | val;
>> + task->thread.envcfg = envcfg;
>> + if (task == current)
>> + csr_write(CSR_ENVCFG, envcfg);
>> +}
>> +
>> static inline void __switch_to_envcfg(struct task_struct *next)
>> {
>> asm volatile (ALTERNATIVE("nop", "csrw " __stringify(CSR_ENVCFG) ", %0",
>> diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
>> index e4bc61c4e58a..1280a7c4a412 100644
>> --- a/arch/riscv/kernel/process.c
>> +++ b/arch/riscv/kernel/process.c
>> @@ -7,6 +7,7 @@
>> * Copyright (C) 2017 SiFive
>> */
>>
>> +#include <linux/bitfield.h>
>> #include <linux/cpu.h>
>> #include <linux/kernel.h>
>> #include <linux/sched.h>
>> @@ -171,6 +172,9 @@ void flush_thread(void)
>> memset(&current->thread.vstate, 0, sizeof(struct __riscv_v_ext_state));
>> clear_tsk_thread_flag(current, TIF_RISCV_V_DEFER_RESTORE);
>> #endif
>> + if (IS_ENABLED(CONFIG_RISCV_ISA_SUPM) &&
>> + riscv_has_extension_unlikely(RISCV_ISA_EXT_SUPM))
>> + envcfg_update_bits(current, ENVCFG_PMM, ENVCFG_PMM_PMLEN_0);
>
> Seeing a compile warning with this patch on RV32.
>
> linux/arch/riscv/kernel/process.c: In function 'flush_thread':
> linux/arch/riscv/include/asm/csr.h:202:41: warning: conversion from
> 'long long unsigned int' to 'long unsigned int' changes value from
> '12884901888' to '0' [-Woverflow]
> 202 | #define ENVCFG_PMM (_AC(0x3, ULL) << 32)
> | ^~~~~~~~~~~~~~~~~~~~~
> linux/arch/riscv/kernel/process.c:179:45: note: in expansion of macro 'ENVCFG_PMM'
>   179 |         envcfg_update_bits(current, ENVCFG_PMM, ENVCFG_PMM_PMLEN_0);
>       |                                     ^~~~~~~~~~
Right, thanks, that's why I needed to use #ifdef here before: IS_ENABLED()
only eliminates the dead branch after it has been compiled, so on RV32 the
64-bit ENVCFG_PMM constant is still evaluated and truncated. I'll switch
this instance back for v4.
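
Roughly like this sketch (the exact v4 hunk may differ):

#ifdef CONFIG_RISCV_ISA_SUPM
	/*
	 * Guarding with #ifdef means RV32 never sees the 64-bit
	 * ENVCFG_PMM constant; since RISCV_ISA_SUPM depends on 64BIT,
	 * nothing is lost on RV32.
	 */
	if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SUPM))
		envcfg_update_bits(current, ENVCFG_PMM, ENVCFG_PMM_PMLEN_0);
#endif
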
Regards,
Samuel