Date:   Tue, 3 Aug 2021 12:40:35 +0100
From:   Will Deacon <will@...nel.org>
To:     Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>
Cc:     linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
        linux-kernel@...r.kernel.org, maz@...nel.org,
        catalin.marinas@....com, james.morse@....com,
        julien.thierry.kdev@...il.com, suzuki.poulose@....com,
        jean-philippe@...aro.org, Alexandru.Elisei@....com,
        qperret@...gle.com, linuxarm@...wei.com
Subject: Re: [PATCH v3 4/4] KVM: arm64: Clear active_vmids on vCPU schedule out

On Thu, Jul 29, 2021 at 11:40:09AM +0100, Shameer Kolothum wrote:
> Like the ASID allocator, we copy active_vmids into
> reserved_vmids on a rollover. But it is unlikely that
> every CPU will have a vCPU as its current task, so we
> may end up unnecessarily reserving VMID space.
> 
> Hence, clear active_vmids when scheduling out a vCPU.
> 
> Suggested-by: Will Deacon <will@...nel.org>
> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>
> ---
>  arch/arm64/include/asm/kvm_host.h | 1 +
>  arch/arm64/kvm/arm.c              | 1 +
>  arch/arm64/kvm/vmid.c             | 6 ++++++
>  3 files changed, 8 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index bb993bce1363..d93141cb8d16 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -687,6 +687,7 @@ extern unsigned int kvm_arm_vmid_bits;
>  int kvm_arm_vmid_alloc_init(void);
>  void kvm_arm_vmid_alloc_free(void);
>  void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
> +void kvm_arm_vmid_clear_active(void);
>  
>  static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
>  {
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 077e55a511a9..b134a1b89c84 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -435,6 +435,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>  	kvm_timer_vcpu_put(vcpu);
>  	kvm_vgic_put(vcpu);
>  	kvm_vcpu_pmu_restore_host(vcpu);
> +	kvm_arm_vmid_clear_active();
>  
>  	vcpu->cpu = -1;
>  }
> diff --git a/arch/arm64/kvm/vmid.c b/arch/arm64/kvm/vmid.c
> index 5584e84aed95..5fd51f5445c1 100644
> --- a/arch/arm64/kvm/vmid.c
> +++ b/arch/arm64/kvm/vmid.c
> @@ -116,6 +116,12 @@ static u64 new_vmid(struct kvm_vmid *kvm_vmid)
>  	return idx2vmid(vmid) | generation;
>  }
>  
> +/* Call with preemption disabled */
> +void kvm_arm_vmid_clear_active(void)
> +{
> +	atomic64_set(this_cpu_ptr(&active_vmids), 0);
> +}

I think this is very broken, as it will force everybody to take the
slow-path when they see an active_vmid of 0.

It also doesn't solve the issue I mentioned before, as an active_vmid of 0
means that the reserved vmid is preserved.
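To make both points concrete, here is a rough userspace model of the two paths involved. The names (try_fast_path, flush_context_one_cpu) and the generation layout are illustrative, not the kernel's exact code:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch only: models one CPU's per-cpu slots. */
static uint64_t active_vmid;                    /* stands in for active_vmids */
static uint64_t reserved_vmid;                  /* stands in for reserved_vmids */
static uint64_t current_gen = 1ULL << 16;       /* generation in the high bits */

static int vmid_gen_match(uint64_t vmid)
{
	return (vmid >> 16) == (current_gen >> 16);
}

/* Returns 1 if the fast path is taken, 0 if we fall to the slow path. */
static int try_fast_path(uint64_t vmid)
{
	uint64_t old = active_vmid;

	/*
	 * An active_vmid of 0 fails the first test, so clearing it on
	 * vcpu_put forces the next vcpu_load onto the slow path.
	 */
	if (old && vmid_gen_match(vmid)) {
		active_vmid = vmid;
		return 1;
	}
	return 0;
}

/* Rollover handling for one CPU, in the style of the ASID allocator. */
static void flush_context_one_cpu(void)
{
	uint64_t vmid = active_vmid;

	active_vmid = 0;
	/*
	 * A 0 here means "keep whatever was reserved before", so a vCPU
	 * that cleared its active_vmid on sched-out still pins its old
	 * reserved VMID across the rollover.
	 */
	if (vmid == 0)
		vmid = reserved_vmid;
	reserved_vmid = vmid;
}
```

Run through it: after kvm_arch_vcpu_put() zeroes the slot, the next try_fast_path() always misses, and flush_context_one_cpu() re-reserves the stale VMID anyway.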

Needs more thought...

Will
