Date:   Tue, 3 Aug 2021 12:38:56 +0100
From:   Will Deacon <will@...nel.org>
To:     Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>
Cc:     linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
        linux-kernel@...r.kernel.org, maz@...nel.org,
        catalin.marinas@....com, james.morse@....com,
        julien.thierry.kdev@...il.com, suzuki.poulose@....com,
        jean-philippe@...aro.org, Alexandru.Elisei@....com,
        qperret@...gle.com, linuxarm@...wei.com
Subject: Re: [PATCH v3 1/4] KVM: arm64: Introduce a new VMID allocator for KVM

On Thu, Jul 29, 2021 at 11:40:06AM +0100, Shameer Kolothum wrote:
> A new VMID allocator for arm64 KVM use. This is based on
> the arm64 ASID allocator algorithm.
> 
> One major deviation from the ASID allocator is the way we
> flush the context. Unlike the ASID allocator, we expect
> less frequent rollovers for VMIDs. Hence, instead of
> marking the CPU as flush_pending and issuing a local context
> invalidation on the next context switch, we broadcast a TLB
> flush + I-cache invalidation over the inner-shareable domain
> on rollover.
> 
> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>
> ---

[...]

> +void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
> +{
> +	unsigned long flags;
> +	unsigned int cpu;
> +	u64 vmid, old_active_vmid;
> +
> +	vmid = atomic64_read(&kvm_vmid->id);
> +
> +	/*
> +	 * Please refer to the comments in check_and_switch_context()
> +	 * in arch/arm64/mm/context.c.
> +	 */
> +	old_active_vmid = atomic64_read(this_cpu_ptr(&active_vmids));
> +	if (old_active_vmid && vmid_gen_match(vmid) &&
> +	    atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_vmids),
> +				     old_active_vmid, vmid))
> +		return;
> +
> +	raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
> +
> +	/* Check that our VMID belongs to the current generation. */
> +	vmid = atomic64_read(&kvm_vmid->id);
> +	if (!vmid_gen_match(vmid)) {
> +		vmid = new_vmid(kvm_vmid);
> +		atomic64_set(&kvm_vmid->id, vmid);

new_vmid() can just set kvm_vmid->id directly, then the atomic64_set()
here becomes unnecessary.

> +	}
> +
> +	cpu = smp_processor_id();

Why?

Will
