Message-ID: <Z8JOvMx6iLexT3pK@google.com>
Date: Fri, 28 Feb 2025 16:03:08 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Yosry Ahmed <yosry.ahmed@...ux.dev>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org, linux-kernel@...r.kernel.org, 
	Jim Mattson <jmattson@...gle.com>
Subject: Re: [RFC PATCH 01/13] KVM: nSVM: Track the ASID per-VMCB

+Jim, for his input on VPIDs.

On Wed, Feb 05, 2025, Yosry Ahmed wrote:
> The ASID is currently tracked per-vCPU, because the same ASID is used by
> L1 and L2. That ASID is flushed on every transition between L1 and L2.
> 
> Track the ASID separately for each VMCB (similar to the
> asid_generation), giving L2 a separate ASID. This is in preparation for
> doing fine-grained TLB flushes on nested transitions instead of
> unconditional full flushes.

After having some time to think about this, rather than track ASIDs per VMCB, I
think we should converge on a single approach for nVMX (VPID) and nSVM (ASID).

Per **VM**, one VPID/ASID for L1, and one VPID/ASID for L2.
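To make the proposal concrete, a minimal sketch of what "one ASID for L1, one
for L2, per VM" could look like (all names here are invented for illustration,
not KVM's actual structures; ASID 0 is reserved for the host on SVM):

```c
/* Hypothetical sketch of a static per-VM ASID pair (names invented). */
#include <stdint.h>

/* Allocated once at VM creation and never reassigned. */
struct vm_asids {
	uint32_t l1_asid;
	uint32_t l2_asid;
};

/* Trivial global allocator; ASID 0 is reserved for the host on SVM. */
static uint32_t next_asid = 1;

static void vm_asids_init(struct vm_asids *a)
{
	a->l1_asid = next_asid++;
	a->l2_asid = next_asid++;
}

/*
 * Every vCPU of the VM selects its ASID purely by whether it is
 * running L2, so no per-vCPU allocation or generation tracking.
 */
static uint32_t vcpu_asid(const struct vm_asids *a, int is_guest_mode)
{
	return is_guest_mode ? a->l2_asid : a->l1_asid;
}
```

With static ASIDs, nested transitions only need targeted flushes of the
relevant ASID rather than allocating a new one.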

For SVM, the dynamic ASID crud is a holdover from KVM's support for CPUs that
don't support FLUSHBYASID, i.e. from when KVM needed to purge the entire TLB in
order to flush guest mappings.  FLUSHBYASID was added in 2010, and AFAIK has
been supported by all AMD CPUs since.

KVM already mostly keeps the same ASID, except for when a vCPU is migrated, in
which case KVM assigns a new ASID.  I suspect that following VMX's lead and
simply doing a TLB flush in this situation would be an improvement for modern
CPUs, as it would flush the entries that need to be flushed, and not pollute the
TLBs with stale, unused entries.
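A sketch of that suggested migration handling: keep the ASID and request a
flush of just that ASID when the vCPU lands on a new pCPU, instead of
allocating a fresh ASID.  The struct and function names are invented for
illustration; the TLB_CONTROL encodings match the APM (3 requires
FLUSHBYASID):

```c
/* Hypothetical sketch: flush-on-migration instead of new-ASID-on-migration. */
#include <stdint.h>

#define TLB_CONTROL_DO_NOTHING	0
#define TLB_CONTROL_FLUSH_ASID	3	/* flush this guest's TLB entries */

struct demo_vcpu {
	uint32_t asid;		/* static, assigned at VM creation */
	int last_cpu;		/* pCPU this vCPU last ran on */
	uint8_t tlb_ctl;	/* flush request for the next VMRUN */
};

static void pre_vmrun(struct demo_vcpu *v, int cpu)
{
	/*
	 * On migration, keep the ASID and flush its (potentially stale)
	 * entries on the new pCPU, rather than burning a new ASID and
	 * leaving the old entries to pollute the TLB.
	 */
	if (v->last_cpu != cpu) {
		v->tlb_ctl = TLB_CONTROL_FLUSH_ASID;
		v->last_cpu = cpu;
	} else {
		v->tlb_ctl = TLB_CONTROL_DO_NOTHING;
	}
}
```

The ASID never changes; only the flush request does, which is what lets the
ASID be a static per-VM property.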

Using a static per-VM ASID would also allow using broadcast invalidations[*],
would simplify the SVM code base, and I think/hope would allow us to move much
of the TLB flushing logic, e.g. for task migration, to common code.

For VPIDs, maybe it's because it's Friday afternoon, but for the life of me I
can't think of any reason why KVM needs to assign VPIDs per vCPU.  Especially
since KVM is ridiculously conservative and flushes _all_ EPT/VPID contexts when
running a different vCPU on a pCPU (which I suspect we can trim down?).

Am I forgetting something?

[*] https://lore.kernel.org/all/Z8HdBg3wj8M7a4ts@google.com
