Message-ID: <f28c9a67759cb04157e888b3a71b2ce2@kernel.org>
Date:   Tue, 10 Nov 2020 15:08:54 +0000
From:   Marc Zyngier <maz@...nel.org>
To:     David Brazdil <dbrazdil@...gle.com>
Cc:     kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org, James Morse <james.morse@....com>,
        Julien Thierry <julien.thierry.kdev@...il.com>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>, Dennis Zhou <dennis@...nel.org>,
        Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>,
        Mark Rutland <mark.rutland@....com>,
        Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
        Quentin Perret <qperret@...gle.com>,
        Andrew Scull <ascull@...gle.com>,
        Andrew Walbran <qwandor@...gle.com>, kernel-team@...roid.com
Subject: Re: [PATCH v1 06/24] kvm: arm64: Support per_cpu_ptr in nVHE hyp code

On 2020-11-09 11:32, David Brazdil wrote:
> When compiling with __KVM_NVHE_HYPERVISOR__ redefine per_cpu_offset() to
> __hyp_per_cpu_offset() which looks up the base of the nVHE per-CPU
> region of the given cpu and computes its offset from the
> .hyp.data..percpu section.
> 
> This enables use of per_cpu_ptr() helpers in nVHE hyp code. Until now
> only this_cpu_ptr() was supported by setting TPIDR_EL2.
> 
> Signed-off-by: David Brazdil <dbrazdil@...gle.com>
> ---
>  arch/arm64/include/asm/percpu.h  |  6 ++++++
>  arch/arm64/kernel/image-vars.h   |  3 +++
>  arch/arm64/kvm/hyp/nvhe/Makefile |  3 ++-
>  arch/arm64/kvm/hyp/nvhe/percpu.c | 22 ++++++++++++++++++++++
>  4 files changed, 33 insertions(+), 1 deletion(-)
>  create mode 100644 arch/arm64/kvm/hyp/nvhe/percpu.c
> 
> diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
> index 1599e17379d8..8f1661603b78 100644
> --- a/arch/arm64/include/asm/percpu.h
> +++ b/arch/arm64/include/asm/percpu.h
> @@ -239,6 +239,12 @@ PERCPU_RET_OP(add, add, ldadd)
>  #define this_cpu_cmpxchg_8(pcp, o, n)	\
>  	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
> 
> +#ifdef __KVM_NVHE_HYPERVISOR__
> +extern unsigned long __hyp_per_cpu_offset(unsigned int cpu);
> +#define __per_cpu_offset
> +#define per_cpu_offset(cpu)	__hyp_per_cpu_offset((cpu))
> +#endif
> +
>  #include <asm-generic/percpu.h>
> 
>  /* Redefine macros for nVHE hyp under DEBUG_PREEMPT to avoid its dependencies. */
> diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
> index c615b285ff5b..78a42a7cdb72 100644
> --- a/arch/arm64/kernel/image-vars.h
> +++ b/arch/arm64/kernel/image-vars.h
> @@ -103,6 +103,9 @@ KVM_NVHE_ALIAS(gic_nonsecure_priorities);
>  KVM_NVHE_ALIAS(__start___kvm_ex_table);
>  KVM_NVHE_ALIAS(__stop___kvm_ex_table);
> 
> +/* Array containing bases of nVHE per-CPU memory regions. */
> +KVM_NVHE_ALIAS(kvm_arm_hyp_percpu_base);
> +
>  #endif /* CONFIG_KVM */
> 
>  #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
> diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
> index ddde15fe85f2..c45f440cce51 100644
> --- a/arch/arm64/kvm/hyp/nvhe/Makefile
> +++ b/arch/arm64/kvm/hyp/nvhe/Makefile
> @@ -6,7 +6,8 @@
>  asflags-y := -D__KVM_NVHE_HYPERVISOR__
>  ccflags-y := -D__KVM_NVHE_HYPERVISOR__
> 
> -obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o hyp-main.o
> +obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
> +	 hyp-main.o percpu.o
>  obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
>  	 ../fpsimd.o ../hyp-entry.o
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/percpu.c b/arch/arm64/kvm/hyp/nvhe/percpu.c
> new file mode 100644
> index 000000000000..5fd0c5696907
> --- /dev/null
> +++ b/arch/arm64/kvm/hyp/nvhe/percpu.c
> @@ -0,0 +1,22 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (C) 2020 - Google LLC
> + * Author: David Brazdil <dbrazdil@...gle.com>
> + */
> +
> +#include <asm/kvm_asm.h>
> +#include <asm/kvm_hyp.h>
> +#include <asm/kvm_mmu.h>
> +
> +unsigned long __hyp_per_cpu_offset(unsigned int cpu)
> +{
> +	unsigned long *cpu_base_array;
> +	unsigned long this_cpu_base;
> +
> +	if (cpu >= ARRAY_SIZE(kvm_arm_hyp_percpu_base))
> +		hyp_panic();
> +
> +	cpu_base_array = kern_hyp_va(&kvm_arm_hyp_percpu_base[0]);

There is no guarantee that this will not generate PC-relative
addressing, resulting in kern_hyp_va() being applied twice.

Consider using hyp_symbol_addr() instead, which always does the right
thing by forcing PC-relative addressing and not subsequently mangling
the address.
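
Something along these lines (an untested sketch, relying on the
hyp_symbol_addr() helper mentioned above):

	/* Take a PC-relative reference to the array itself. */
	cpu_base_array = (unsigned long *)hyp_symbol_addr(kvm_arm_hyp_percpu_base);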

> +	this_cpu_base = kern_hyp_va(cpu_base_array[cpu]);
> +	return this_cpu_base - (unsigned long)&__per_cpu_start;

And this is the opposite case: if the compiler generates an absolute
address, you're toast. Yes, this is just as unlikely, but hey...
The same remedy should apply.
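
FWIW, with both references going via hyp_symbol_addr(), the whole
helper could end up looking something like this (completely untested):

	unsigned long __hyp_per_cpu_offset(unsigned int cpu)
	{
		unsigned long *cpu_base_array;
		unsigned long this_cpu_base;
		unsigned long elf_base;

		if (cpu >= ARRAY_SIZE(kvm_arm_hyp_percpu_base))
			hyp_panic();

		/* PC-relative reference to the array of per-CPU bases. */
		cpu_base_array = (unsigned long *)hyp_symbol_addr(kvm_arm_hyp_percpu_base);
		/* The array holds kernel VAs, so this conversion is still needed. */
		this_cpu_base = kern_hyp_va(cpu_base_array[cpu]);
		/* PC-relative reference to the section start, never an absolute address. */
		elf_base = (unsigned long)hyp_symbol_addr(__per_cpu_start);
		return this_cpu_base - elf_base;
	}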

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...
