Message-ID: <875ytworvy.wl-maz@kernel.org>
Date:   Sun, 17 Oct 2021 11:41:21 +0100
From:   Marc Zyngier <maz@...nel.org>
To:     Quentin Perret <qperret@...gle.com>
Cc:     James Morse <james.morse@....com>,
        Alexandru Elisei <alexandru.elisei@....com>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>, Fuad Tabba <tabba@...gle.com>,
        David Brazdil <dbrazdil@...gle.com>,
        linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
        linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH 04/16] KVM: arm64: Introduce kvm_share_hyp()

On Wed, 13 Oct 2021 16:58:19 +0100,
Quentin Perret <qperret@...gle.com> wrote:
> 
> The create_hyp_mappings() function can currently be called at any point
> in time. However, its behaviour in protected mode varies widely
> depending on when it is called. Prior to KVM init, it is used to
> create the temporary page-table used to bring up the hypervisor, and
> later on it is transparently turned into a 'share' hypercall when the
> kernel has lost control over the hypervisor stage-1. In order to
> prepare the ground for also unsharing pages with the hypervisor during
> guest teardown, introduce a kvm_share_hyp() function to make it clear
> in which places a share hypercall should be expected, as we will soon
> need a matching unshare hypercall in all those places.
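
To make the intent concrete: each of the share sites touched below is
expected to grow a matching unshare on the teardown path. Purely as a
hypothetical sketch, with the name assumed from the commit message
rather than taken from this patch:

	/* expected counterpart, mirroring kvm_share_hyp() */
	int kvm_unshare_hyp(void *from, void *to);

	/* e.g. on VM teardown, undoing the share from kvm_arch_init_vm() */
	kvm_unshare_hyp(kvm, kvm + 1);
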
> 
> Signed-off-by: Quentin Perret <qperret@...gle.com>
> ---
>  arch/arm64/include/asm/kvm_mmu.h |  1 +
>  arch/arm64/kvm/arm.c             |  7 +++----
>  arch/arm64/kvm/fpsimd.c          |  4 ++--
>  arch/arm64/kvm/mmu.c             | 19 +++++++++++++------
>  4 files changed, 19 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 02d378887743..185d0f62b724 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -150,6 +150,7 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
>  #include <asm/kvm_pgtable.h>
>  #include <asm/stage2_pgtable.h>
>  
> +int kvm_share_hyp(void *from, void *to);
>  int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
>  int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
>  			   void __iomem **kaddr,
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index c33d8c073820..f2e74635332b 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -146,7 +146,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>  	if (ret)
>  		return ret;
>  
> -	ret = create_hyp_mappings(kvm, kvm + 1, PAGE_HYP);
> +	ret = kvm_share_hyp(kvm, kvm + 1);
>  	if (ret)
>  		goto out_free_stage2_pgd;
>  
> @@ -341,7 +341,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>  	if (err)
>  		return err;
>  
> -	return create_hyp_mappings(vcpu, vcpu + 1, PAGE_HYP);
> +	return kvm_share_hyp(vcpu, vcpu + 1);
>  }
>  
>  void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
> @@ -623,8 +623,7 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
>  
>  		sve_end = vcpu->arch.sve_state + vcpu_sve_state_size(vcpu);
>  
> -		ret = create_hyp_mappings(vcpu->arch.sve_state, sve_end,
> -					  PAGE_HYP);
> +		ret = kvm_share_hyp(vcpu->arch.sve_state, sve_end);
>  		if (ret)
>  			return ret;
>  	}
> diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> index 62c0d78da7be..2fe1128d9f3d 100644
> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -35,11 +35,11 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
>  	 * Make sure the host task thread flags and fpsimd state are
>  	 * visible to hyp:
>  	 */
> -	ret = create_hyp_mappings(ti, ti + 1, PAGE_HYP);
> +	ret = kvm_share_hyp(ti, ti + 1);
>  	if (ret)
>  		goto error;
>  
> -	ret = create_hyp_mappings(fpsimd, fpsimd + 1, PAGE_HYP);
> +	ret = kvm_share_hyp(fpsimd, fpsimd + 1);
>  	if (ret)
>  		goto error;
>  
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 1a94a7ca48f2..f80673e863ac 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -296,6 +296,17 @@ static int pkvm_share_hyp(phys_addr_t start, phys_addr_t end)
>  	return 0;
>  }
>  
> +int kvm_share_hyp(void *from, void *to)
> +{
> +	if (is_kernel_in_hyp_mode())
> +		return 0;
> +
> +	if (kvm_host_owns_hyp_mappings())
> +		return create_hyp_mappings(from, to, PAGE_HYP);

Not directly related to this code, but it looks to me like
kvm_host_owns_hyp_mappings() really ought to check for
is_kernel_in_hyp_mode() on its own: VHE deals with its own
mappings, and create_hyp_mappings() already has a check that makes it
do nothing on VHE.
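
Something like this, as an untested sketch (only the early return is
new; the rest of the helper would stay as-is):

	static bool kvm_host_owns_hyp_mappings(void)
	{
		/*
		 * With VHE, the kernel runs at EL2 and manages its own
		 * mappings, so it never owns a separate hyp stage-1.
		 */
		if (is_kernel_in_hyp_mode())
			return false;

		/* ... existing checks unchanged ... */
	}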

> +
> +	return pkvm_share_hyp(kvm_kaddr_to_phys(from), kvm_kaddr_to_phys(to));
> +}
> +
>  /**
>   * create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
>   * @from:	The virtual kernel start address of the range
> @@ -316,12 +327,8 @@ int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
>  	if (is_kernel_in_hyp_mode())
>  		return 0;
>  
> -	if (!kvm_host_owns_hyp_mappings()) {
> -		if (WARN_ON(prot != PAGE_HYP))
> -			return -EPERM;
> -		return pkvm_share_hyp(kvm_kaddr_to_phys(from),
> -				      kvm_kaddr_to_phys(to));
> -	}
> +	if (WARN_ON(!kvm_host_owns_hyp_mappings()))
> +		return -EPERM;

Do we really need this? Can't we just verify that all the code paths
to create_hyp_mappings() check for kvm_host_owns_hyp_mappings()?

At the very least, make this a VM_BUG_ON() so that the check is limited
to debug builds. Yes, I'm quickly developing a WARN_ON()-phobia.
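
Concretely, something like this (a sketch; VM_BUG_ON() compiles away
unless CONFIG_DEBUG_VM is set):

	/*
	 * Callers are expected to go via kvm_share_hyp() once the host
	 * no longer owns the hyp mappings, so only catch offenders in
	 * debug builds.
	 */
	VM_BUG_ON(!kvm_host_owns_hyp_mappings());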

	M.

-- 
Without deviation from the norm, progress is not possible.
