Date:   Mon, 3 Jul 2017 10:03:13 +0200
From:   Christoffer Dall <cdall@...aro.org>
To:     Alexander Graf <agraf@...e.de>
Cc:     kvmarm@...ts.cs.columbia.edu, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org
Subject: Re: [PATCH] KVM: arm/arm64: Handle hva aging while destroying the vm

Hi Alex,

On Fri, Jun 23, 2017 at 05:21:59PM +0200, Alexander Graf wrote:
> If we want to age an HVA while the VM is getting destroyed, we have a
> tiny race window during which we may end up dereferencing an invalid
> kvm->arch.pgd value.
> 
>    CPU0               CPU1
> 
>    kvm_age_hva()
>                       kvm_mmu_notifier_release()
>                       kvm_arch_flush_shadow_all()
>                       kvm_free_stage2_pgd()
>                       <grab mmu_lock>
>    stage2_get_pmd()
>    <wait for mmu_lock>
>                       set kvm->arch.pgd = 0
>                       <free mmu_lock>
>    <grab mmu_lock>
>    stage2_get_pud()
>    <access kvm->arch.pgd>
>    <use incorrect value>

I don't think this sequence can happen, but I think kvm_age_hva() can
be called with the mmu_lock held and kvm->arch.pgd already NULL.

Is it possible for the mmu notifiers to call clear(_flush)_young
while also calling notifier_release?

If so, the patch below looks good to me.

Thanks,
-Christoffer


> 
> This patch adds a check for that case.
> 
> Signed-off-by: Alexander Graf <agraf@...e.de>
> ---
>  virt/kvm/arm/mmu.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index f2d5b6c..227931f 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -861,6 +861,10 @@ static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
>  	pgd_t *pgd;
>  	pud_t *pud;
>  
> +	/* Do we clash with kvm_free_stage2_pgd()? */
> +	if (!kvm->arch.pgd)
> +		return NULL;
> +
>  	pgd = kvm->arch.pgd + stage2_pgd_index(addr);
>  	if (WARN_ON(stage2_pgd_none(*pgd))) {
>  		if (!cache)
> -- 
> 1.8.5.6
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm@...ts.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
