Message-ID: <CA+EHjTys1a788HiLnBYu5yySOQ4BKPFxccXhO8P4dLnUCgBUQA@mail.gmail.com>
Date: Tue, 29 Mar 2022 09:51:00 +0100
From: Fuad Tabba <tabba@...gle.com>
To: Kalesh Singh <kaleshsingh@...gle.com>
Cc: will@...nel.org, maz@...nel.org, qperret@...gle.com,
surenb@...gle.com, kernel-team@...roid.com,
James Morse <james.morse@....com>,
Alexandru Elisei <alexandru.elisei@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Catalin Marinas <catalin.marinas@....com>,
Mark Rutland <mark.rutland@....com>,
Mark Brown <broonie@...nel.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Peter Collingbourne <pcc@...gle.com>,
"Madhavan T. Venkataraman" <madvenka@...ux.microsoft.com>,
Andrew Scull <ascull@...gle.com>,
Ard Biesheuvel <ardb@...nel.org>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 4/8] KVM: arm64: Add guard pages for pKVM (protected
nVHE) hypervisor stack
Hi Kalesh,
On Mon, Mar 14, 2022 at 8:04 PM Kalesh Singh <kaleshsingh@...gle.com> wrote:
>
> Map the stack pages in the flexible private VA range and allocate
> guard pages below the stack as unbacked VA space. The stack is aligned
> so that any valid stack address has the PAGE_SHIFT bit set to 1 - this
> is used for overflow detection (implemented in a subsequent patch in
> the series).
>
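For anyone following along: the overflow check this sets up reduces to
testing bit PAGE_SHIFT of the stack pointer. A minimal C sketch of the
idea (hypothetical helper name, not the code from the later patch):

static inline bool hyp_stack_overflowed(unsigned long sp)
{
	/*
	 * PAGE_SIZE == (1UL << PAGE_SHIFT), so this tests bit
	 * PAGE_SHIFT of sp: 1 while sp is within the mapped stack
	 * page, 0 once it has underflowed into the unbacked guard
	 * page below.
	 */
	return !(sp & PAGE_SIZE);
}
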
> Signed-off-by: Kalesh Singh <kaleshsingh@...gle.com>
Tested-by: Fuad Tabba <tabba@...gle.com>
Reviewed-by: Fuad Tabba <tabba@...gle.com>
Thanks,
/fuad
> ---
>
> Changes in v6:
> - Update call to pkvm_alloc_private_va_range() (return val and params)
>
> Changes in v5:
> - Use a single allocation for stack and guard pages to ensure they
> are contiguous, per Marc
>
> Changes in v4:
> - Replace IS_ERR_OR_NULL check with IS_ERR check now that
> pkvm_alloc_private_va_range() returns an error for null
> pointer, per Fuad
>
> Changes in v3:
> - Handle null ptr in IS_ERR_OR_NULL checks, per Mark
>
>
> arch/arm64/kvm/hyp/nvhe/setup.c | 31 ++++++++++++++++++++++++++++---
> 1 file changed, 28 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> index 27af337f9fea..e8d4ea2fcfa0 100644
> --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> @@ -99,17 +99,42 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
> return ret;
>
> for (i = 0; i < hyp_nr_cpus; i++) {
> + struct kvm_nvhe_init_params *params = per_cpu_ptr(&kvm_init_params, i);
> + unsigned long hyp_addr;
> +
> start = (void *)kern_hyp_va(per_cpu_base[i]);
> end = start + PAGE_ALIGN(hyp_percpu_size);
> ret = pkvm_create_mappings(start, end, PAGE_HYP);
> if (ret)
> return ret;
>
> - end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va;
> - start = end - PAGE_SIZE;
> - ret = pkvm_create_mappings(start, end, PAGE_HYP);
> + /*
> + * Allocate a contiguous HYP private VA range for the stack
> + * and guard page. The allocation is also aligned based on
> + * the order of its size.
> + */
> + ret = pkvm_alloc_private_va_range(PAGE_SIZE * 2, &hyp_addr);
> + if (ret)
> + return ret;
> +
> + /*
> + * Since the stack grows downwards, map the stack to the page
> + * at the higher address and leave the lower guard page
> + * unbacked.
> + *
> + * Any valid stack address now has the PAGE_SHIFT bit as 1
> + * and addresses corresponding to the guard page have the
> + * PAGE_SHIFT bit as 0 - this is used for overflow detection.
> + */
> + hyp_spin_lock(&pkvm_pgd_lock);
> + ret = kvm_pgtable_hyp_map(&pkvm_pgtable, hyp_addr + PAGE_SIZE,
> + PAGE_SIZE, params->stack_pa, PAGE_HYP);
> + hyp_spin_unlock(&pkvm_pgd_lock);
> if (ret)
> return ret;
> +
> + /* Update stack_hyp_va to end of the stack's private VA range */
> + params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
> }
>
> /*
> --
> 2.35.1.723.g4982287a31-goog
>
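
One aside on why the alignment guarantee works out (an illustrative
walk-through with made-up addresses, not taken from the patch):
pkvm_alloc_private_va_range() aligns the 2 * PAGE_SIZE allocation to
its own size, so bit PAGE_SHIFT of hyp_addr is always 0. With 4K pages
(PAGE_SHIFT == 12) and, say, hyp_addr == 0x4000:

	hyp_addr             == 0x4000  /* guard page, unbacked; bit 12 == 0 */
	hyp_addr + PAGE_SIZE == 0x5000  /* stack page, mapped;   bit 12 == 1 */
	params->stack_hyp_va == 0x6000  /* initial SP; pushes land in 0x5xxx */

Every address the stack actually touches (0x5000-0x5fff) has bit 12
set, while anything in the guard page (0x4000-0x4fff) has it clear,
which is what makes the single-bit overflow test above possible.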