Message-ID: <aQMliHDRpejqwOro@atcsi01.andestech.com>
Date: Thu, 30 Oct 2025 16:44:56 +0800
From: Mina Chou <minachou@...estech.com>
To: Anup Patel <apatel@...tanamicro.com>
CC: <anup@...infault.org>, <atish.patra@...ux.dev>, <pjw@...nel.org>,
        <palmer@...belt.com>, <aou@...s.berkeley.edu>, <alex@...ti.fr>,
        <kvm@...r.kernel.org>, <kvm-riscv@...ts.infradead.org>,
        <linux-riscv@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
        <tim609@...estech.com>, <ben717@...estech.com>, <az70021@...il.com>
Subject: Re: [PATCH v2] RISC-V: KVM: flush VS-stage TLB after VCPU migration
 to prevent stale entries
Hi Anup,
> 
> Here's what the non-normative text says about HFENCE.GVMA ...
> 
> "Conceptually, an implementation might contain two address-translation
> caches: one that
> maps guest virtual addresses to guest physical addresses, and another
> that maps guest
> physical addresses to supervisor physical addresses. HFENCE.GVMA need
> not flush the
> former cache, but it must flush entries from the latter cache that
> match the HFENCE.GVMA's
> address and VMID arguments."
> "More commonly, implementations contain address-translation caches
> that map guest virtual
> addresses directly to supervisor physical addresses, removing a level
> of indirection. For such
> implementations, any entry whose guest virtual address maps to a guest
> physical address that
> matches the HFENCE.GVMA's address and VMID arguments must be flushed.
> Selectively
> flushing entries in this fashion requires tagging them with the guest
> physical address, which is
> costly, and so a common technique is to flush all entries that match
> the HFENCE.GVMA's
> VMID argument, regardless of the address argument."
> 
> This means ...
> 
> For implementations (most common) which have TLBs caching
> guest virtual addresses to supervisor physical addresses,
> kvm_riscv_local_hfence_gvma_vmid_all() is sufficient when a
> VCPU migrates to a different host CPU.
> 
> For implementations (relatively uncommon) which have TLBs
> caching guest virtual addresses to guest physical addresses,
> HFENCE.GVMA will not touch the guest virtual to guest
> physical address mappings, and KVM must explicitly sanitize
> VS-stage mappings using HFENCE.VVMA (like this patch)
> when migrating a VCPU to a different host CPU.
> 
> We should not penalize all implementations by unconditionally calling
> kvm_riscv_local_hfence_vvma_all(); rather, this should only be
> done on implementations where it is required, using a static jump.
> One possible way of detecting whether the underlying implementation
> needs an explicit HFENCE.VVMA upon VCPU migration is to use
> marchid, mimpid, and mvendorid. Another way is to use
> implementation-specific CPU compatible strings.
> 
> Regards,
> Anup
> 
Thanks for the detailed explanation! Our implementation does require the
extra hfence.vvma, so we'll add a check to make sure it only runs on
the platforms that actually need it.
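
For reference, the rough shape we have in mind looks like the sketch
below. It is only a sketch, not the actual patch: the ID values and
the kvm_riscv_vcpu_sanitize_vs_tlb() hook are placeholders, and it
assumes the static-key helpers from <linux/jump_label.h> plus the
cached M-mode ID accessors (riscv_cached_mvendorid()/marchid()) and
kvm_riscv_local_hfence_vvma_all() already in the tree.

/*
 * Rough sketch: enable the extra VS-stage flush only on implementations
 * whose TLBs cache guest virtual to guest physical translations that
 * HFENCE.GVMA does not touch.
 */
#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(kvm_need_vvma_on_migration);

/* Placeholder ID values for the affected implementation. */
#define QUIRK_MVENDORID		0x0UL
#define QUIRK_MARCHID		0x0UL

static void __init kvm_riscv_detect_vvma_quirk(void)
{
	/*
	 * Match against the M-mode ID CSRs cached at boot; checking only
	 * the boot hart here is a simplification for the sketch.
	 */
	if (riscv_cached_mvendorid(0) == QUIRK_MVENDORID &&
	    riscv_cached_marchid(0) == QUIRK_MARCHID)
		static_branch_enable(&kvm_need_vvma_on_migration);
}

/*
 * Placeholder hook, to be called from wherever the VCPU is seen to have
 * moved to a different host CPU.
 */
static void kvm_riscv_vcpu_sanitize_vs_tlb(unsigned long vmid)
{
	/* G-stage entries are already covered by HFENCE.GVMA. */
	if (static_branch_unlikely(&kvm_need_vvma_on_migration))
		kvm_riscv_local_hfence_vvma_all(vmid);
}

On everything else the static branch stays a patched-out NOP, so other
implementations are not penalized.
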
Thanks again for your feedback.
Best regards,
Mina