Message-ID: <CA+EHjTy6DJt8Pcfj4JnVhSG0sQ7O09zvOaMP--aRuAsM=8zKUw@mail.gmail.com>
Date: Thu, 24 Feb 2022 12:26:14 +0000
From: Fuad Tabba <tabba@...gle.com>
To: Kalesh Singh <kaleshsingh@...gle.com>
Cc: will@...nel.org, maz@...nel.org, qperret@...gle.com,
surenb@...gle.com, kernel-team@...roid.com,
James Morse <james.morse@....com>,
Alexandru Elisei <alexandru.elisei@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Catalin Marinas <catalin.marinas@....com>,
Mark Rutland <mark.rutland@....com>,
Mark Brown <broonie@...nel.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Peter Collingbourne <pcc@...gle.com>,
"Madhavan T. Venkataraman" <madvenka@...ux.microsoft.com>,
Andrew Walbran <qwandor@...gle.com>,
Andrew Scull <ascull@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 3/8] KVM: arm64: Add guard pages for KVM nVHE
hypervisor stack
Hi Kalesh,
On Thu, Feb 24, 2022 at 5:18 AM Kalesh Singh <kaleshsingh@...gle.com> wrote:
>
> Maps the stack pages in the flexible private VA range and allocates
> guard pages below the stack as unbacked VA space. The stack is aligned
> to twice its size to aid overflow detection (implemented in a subsequent
> patch in the series).
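Aside for readers following the series: because the stack base is aligned to
2 * PAGE_SIZE and the guard page sits in the page directly below it, an SP
that is still on the stack page and one that has run into the guard page
differ in the PAGE_SHIFT bit of the VA, which is what lets the later patch
detect overflow with a single bit test on SP. A purely illustrative sketch of
that relationship (the helper name is made up, and this is not the code from
that patch):

static inline bool hyp_sp_overflowed(unsigned long sp,
				     unsigned long stack_page_va)
{
	/*
	 * SP on the stack page: the PAGE_SHIFT bit matches the stack
	 * page VA. SP in the guard page directly below: that bit has
	 * flipped, so the expression is non-zero.
	 */
	return (sp ^ stack_page_va) & PAGE_SIZE;
}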
>
> Signed-off-by: Kalesh Singh <kaleshsingh@...gle.com>
> ---
>
> Changes in v3:
> - Handle null ptr in IS_ERR_OR_NULL checks, per Mark
>
>  arch/arm64/include/asm/kvm_asm.h |  1 +
>  arch/arm64/kvm/arm.c             | 32 +++++++++++++++++++++++++++++---
>  2 files changed, 30 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index d5b0386ef765..2e277f2ed671 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -169,6 +169,7 @@ struct kvm_nvhe_init_params {
>  	unsigned long tcr_el2;
>  	unsigned long tpidr_el2;
>  	unsigned long stack_hyp_va;
> +	unsigned long stack_pa;
>  	phys_addr_t pgd_pa;
>  	unsigned long hcr_el2;
>  	unsigned long vttbr;
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index ecc5958e27fe..7a23630c4a7f 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1541,7 +1541,6 @@ static void cpu_prepare_hyp_mode(int cpu)
>  	tcr |= (idmap_t0sz & GENMASK(TCR_TxSZ_WIDTH - 1, 0)) << TCR_T0SZ_OFFSET;
>  	params->tcr_el2 = tcr;
>
> -	params->stack_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stack_page, cpu) + PAGE_SIZE);
>  	params->pgd_pa = kvm_mmu_get_httbr();
>  	if (is_protected_kvm_enabled())
>  		params->hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS;
> @@ -1990,14 +1989,41 @@ static int init_hyp_mode(void)
>  	 * Map the Hyp stack pages
>  	 */
>  	for_each_possible_cpu(cpu) {
> +		struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
>  		char *stack_page = (char *)per_cpu(kvm_arm_hyp_stack_page, cpu);
> -		err = create_hyp_mappings(stack_page, stack_page + PAGE_SIZE,
> -					  PAGE_HYP);
> +		unsigned long stack_hyp_va, guard_hyp_va;
>
> +		/*
> +		 * Private mappings are allocated downwards from io_map_base
> +		 * so allocate the stack first then the guard page.
> +		 *
> +		 * The stack is aligned to twice its size to facilitate overflow
> +		 * detection.
> +		 */
> +		err = __create_hyp_private_mapping(__pa(stack_page), PAGE_SIZE,
> +				PAGE_SIZE * 2, &stack_hyp_va, PAGE_HYP);
>  		if (err) {
>  			kvm_err("Cannot map hyp stack\n");
>  			goto out_err;
>  		}
> +
> +		/* Allocate unbacked private VA range for stack guard page */
> +		guard_hyp_va = hyp_alloc_private_va_range(PAGE_SIZE, PAGE_SIZE);
> +		if (IS_ERR_OR_NULL((void *)guard_hyp_va)) {
> +			err = guard_hyp_va ? PTR_ERR((void *)guard_hyp_va) : -ENOMEM;
I am a bit confused by this check. hyp_alloc_private_va_range() always
returns ERR_PTR(-ENOMEM) if there's an error. Mark's comment (if I
understood it correctly) was about how you were handling it *in*
hyp_alloc_private_va_range(), rather than calls *to*
hyp_alloc_private_va_range().
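If that is the case, couldn't the caller simply check IS_ERR() here?
Something like this (untested, and assuming hyp_alloc_private_va_range()
really never returns 0/NULL on failure):

		/* Allocate unbacked private VA range for stack guard page */
		guard_hyp_va = hyp_alloc_private_va_range(PAGE_SIZE, PAGE_SIZE);
		if (IS_ERR((void *)guard_hyp_va)) {
			err = PTR_ERR((void *)guard_hyp_va);
			kvm_err("Cannot allocate hyp stack guard page\n");
			goto out_err;
		}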
> +			kvm_err("Cannot allocate hyp stack guard page\n");
> +			goto out_err;
> +		}
> +
> +		/*
> +		 * Save the stack PA in nvhe_init_params. This will be needed to recreate
> +		 * the stack mapping in protected nVHE mode. __hyp_pa() won't do the right
> +		 * thing there, since the stack has been mapped in the flexible private
> +		 * VA space.
> +		 */
Nit: These comments go over 80 columns, unlike other comments that
you've added in this file.
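For example, the latter could be wrapped as (just to illustrate):

		/*
		 * Save the stack PA in nvhe_init_params. This will be
		 * needed to recreate the stack mapping in protected nVHE
		 * mode. __hyp_pa() won't do the right thing there, since
		 * the stack has been mapped in the flexible private VA
		 * space.
		 */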
Thanks,
/fuad
> +		params->stack_pa = __pa(stack_page) + PAGE_SIZE;
> +
> +		params->stack_hyp_va = stack_hyp_va + PAGE_SIZE;
>  	}
>
>  	for_each_possible_cpu(cpu) {
> --
> 2.35.1.473.g83b2b277ed-goog
>