Message-ID: <87sfs06b1u.wl-maz@kernel.org>
Date:   Wed, 02 Mar 2022 07:58:05 +0000
From:   Marc Zyngier <maz@...nel.org>
To:     Kalesh Singh <kaleshsingh@...gle.com>
Cc:     will@...nel.org, qperret@...gle.com, tabba@...gle.com,
        surenb@...gle.com, kernel-team@...roid.com,
        James Morse <james.morse@....com>,
        Alexandru Elisei <alexandru.elisei@....com>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Catalin Marinas <catalin.marinas@....com>,
        Mark Rutland <mark.rutland@....com>,
        Mark Brown <broonie@...nel.org>,
        Masami Hiramatsu <mhiramat@...nel.org>,
        Peter Collingbourne <pcc@...gle.com>,
        "Madhavan T. Venkataraman" <madvenka@...ux.microsoft.com>,
        Andrew Walbran <qwandor@...gle.com>,
        Andrew Scull <ascull@...gle.com>,
        Ard Biesheuvel <ardb@...nel.org>,
        linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 4/8] KVM: arm64: Add guard pages for pKVM (protected nVHE) hypervisor stack

On Fri, 25 Feb 2022 03:34:49 +0000,
Kalesh Singh <kaleshsingh@...gle.com> wrote:
> 
> Maps the stack pages in the flexible private VA range and allocates
> guard pages below the stack as unbacked VA space. The stack is aligned
> to twice its size to aid overflow detection (implemented in a subsequent
> patch in the series).
> 
> Signed-off-by: Kalesh Singh <kaleshsingh@...gle.com>
> ---
> 
> Changes in v4:
>   - Replace IS_ERR_OR_NULL check with IS_ERR check now that
>     pkvm_alloc_private_va_range() returns an error for null
>     pointer, per Fuad
> 
> Changes in v3:
>   - Handle null ptr in IS_ERR_OR_NULL checks, per Mark
> 
>  arch/arm64/kvm/hyp/nvhe/setup.c | 25 +++++++++++++++++++++----
>  1 file changed, 21 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> index 27af337f9fea..1b69a25c1861 100644
> --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> @@ -105,11 +105,28 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
>  		if (ret)
>  			return ret;
>  
> -		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va;
> +		/*
> +		 * Private mappings are allocated upwards from __io_map_base
> +		 * so allocate the guard page first then the stack.
> +		 */
> +		start = (void *)pkvm_alloc_private_va_range(PAGE_SIZE, PAGE_SIZE);
> +		if (IS_ERR(start))
> +			return PTR_ERR(start);
> +
> +		/*
> +		 * The stack is aligned to twice its size to facilitate overflow
> +		 * detection.
> +		 */
> +		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_pa;
>  		start = end - PAGE_SIZE;
> -		ret = pkvm_create_mappings(start, end, PAGE_HYP);
> -		if (ret)
> -			return ret;
> +		start = (void *)__pkvm_create_private_mapping((phys_addr_t)start,
> +					PAGE_SIZE, PAGE_SIZE * 2, PAGE_HYP);

Similar comments to those on the previous patch. I'd rather you treat each
stack as a two-page VA, populated by a single page. It would be a lot
clearer, and less fragile.
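
Something along these lines, maybe (completely untested, and assuming
__pkvm_create_mappings() can safely be called from this path, which is
worth double-checking):

		/*
		 * Reserve both pages with a single, 2 * PAGE_SIZE aligned
		 * allocation, and only back the top page. The bottom page
		 * is never mapped and acts as the guard page.
		 */
		start = (void *)pkvm_alloc_private_va_range(2 * PAGE_SIZE,
							    2 * PAGE_SIZE);
		if (IS_ERR(start))
			return PTR_ERR(start);

		ret = __pkvm_create_mappings((unsigned long)start + PAGE_SIZE,
					     PAGE_SIZE,
					     per_cpu_ptr(&kvm_init_params, i)->stack_pa,
					     PAGE_HYP);
		if (ret)
			return ret;

		per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va =
			(unsigned long)start + 2 * PAGE_SIZE;

The guard page then falls out of a single allocation instead of two
back-to-back ones, which is the fragility I'm worried about.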

> +		if (IS_ERR(start))
> +			return PTR_ERR(start);
> +		end = start + PAGE_SIZE;
> +
> +		/* Update stack_hyp_va to end of the stack's private VA range */
> +		per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va = (unsigned long) end;
>  	}
>  
>  	/*
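
As an aside, the 2 * PAGE_SIZE alignment is what buys you the cheap
overflow detection the commit message mentions: with only the top page
backed, bit PAGE_SHIFT is set for any address inside the stack page and
clear for any address inside the guard page. Purely for illustration
(the real check lives in a later patch in the series, and the helper
name below is made up), the idea boils down to:

	/*
	 * Illustrative only: base is 2 * PAGE_SIZE aligned,
	 * [base, base + PAGE_SIZE) is the unbacked guard page and
	 * [base + PAGE_SIZE, base + 2 * PAGE_SIZE) holds the stack.
	 */
	static inline bool hyp_stack_overflowed(unsigned long sp)
	{
		return !(sp & PAGE_SIZE);
	}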

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
