Message-ID: <20211122121844.867-1-shameerali.kolothum.thodi@huawei.com>
Date: Mon, 22 Nov 2021 12:18:40 +0000
From: Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>
To: <linux-arm-kernel@...ts.infradead.org>,
<kvmarm@...ts.cs.columbia.edu>, <linux-kernel@...r.kernel.org>
CC: <maz@...nel.org>, <will@...nel.org>, <catalin.marinas@....com>,
<james.morse@....com>, <julien.thierry.kdev@...il.com>,
<suzuki.poulose@....com>, <jean-philippe@...aro.org>,
<Alexandru.Elisei@....com>, <qperret@...gle.com>,
<jonathan.cameron@...wei.com>, <linuxarm@...wei.com>
Subject: [PATCH v4 0/4] kvm/arm: New VMID allocator based on asid
Changes from v3:
- Main change is in patch #4, where the VMID is now set to an
  invalid one on vCPU schedule out. Introduced an
  INVALID_ACTIVE_VMID, which is basically VMID 0 with generation 1.
  Since the basic allocator algorithm reserves vmid #0, it is never
  used as an active VMID. This (hopefully) fixes the issue of
  unnecessarily reserving VMID space with active_vmids when those
  VMs are no longer active[0], and at the same time addresses the
  problem noted in v3 wherein everything ends up on the slow path[1].
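  For illustration, a minimal sketch of that scheme (a sketch only:
  apart from INVALID_ACTIVE_VMID, active_vmids and kvm_arm_vmid_bits,
  the helper names below are hypothetical, modelled on the arm64 ASID
  allocator):

	#include <linux/atomic.h>
	#include <linux/percpu.h>

	extern unsigned int kvm_arm_vmid_bits;	/* exposed by patch #2 */

	/*
	 * INVALID_ACTIVE_VMID encodes VMID 0 with generation 1. The
	 * allocator reserves VMID #0, so this value never collides with
	 * a live VMID: sampling it on rollover reserves no real VMID
	 * space[0], yet it is non-zero, so it does not force every
	 * context switch down the slow path the way clearing
	 * active_vmids to 0 did in v3[1].
	 */
	#define INVALID_ACTIVE_VMID	(1UL << kvm_arm_vmid_bits)

	/* Per-CPU VMID currently in use, mirroring the ASID allocator. */
	static DEFINE_PER_CPU(atomic64_t, active_vmids);

	/* Hypothetical hook on the vCPU schedule-out (vcpu_put) path. */
	static inline void vmid_clear_active(void)
	{
		atomic64_set(this_cpu_ptr(&active_vmids), INVALID_ACTIVE_VMID);
	}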
Testing:
- Ran with VMID bits set to 4 and maxcpus set to 8 on D06. The test
  involves running 50 guests with 4 vCPUs each, concurrently. Each
  guest executes hackbench 5 times before exiting. No crashes were
  observed over a 4-day continuous run.
  The latest branch is here:
  https://github.com/hisilicon/kernel-dev/tree/private-v5.16-rc1-vmid-v4
- TLA+ model. Modified the asidalloc model to incorporate the new
  VMID algorithm. The main differences are:
  - flush_tlb_all() instead of local_flush_tlb_all() on rollover.
  - Introduced INVALID_VMID and vCPU schedule-out logic.
  - No CnP (removed the UniqueASIDAllCPUs & UniqueASIDActiveTask
    invariants).
  - Removed the UniqueVMIDPerCPU invariant for now, as it looks like
    speculative fetching combined with flush_tlb_all() opens a small
    window where it gets triggered. If I change the logic back to
    local_flush_tlb_all(), UniqueVMIDPerCPU seems to be fine. With my
    limited knowledge of TLA+ modelling, it is not clear to me
    whether this is a problem with the above logic or with the VMID
    model implementation. I would really appreciate any help with
    the model.
The initial VMID TLA+ model is here:
https://github.com/shamiali2008/kernel-tla/tree/private-vmidalloc-v1
Please take a look and let me know.
Thanks,
Shameer
[0] https://lore.kernel.org/kvmarm/20210721160614.GC11003@willie-the-truck/
[1] https://lore.kernel.org/kvmarm/20210803114034.GB30853@willie-the-truck/
History:
--------
v2 --> v3
- Dropped adding a new static key and cpufeature for retrieving the
  supported VMID bits. Instead, we now make use of the
  kvm_arm_vmid_bits variable (patch #2).
- Since we expect less frequent rollovers in the case of VMIDs, the
  TLB invalidation is now broadcast on rollover instead of keeping
  per-CPU flush_pending info and issuing a local context flush (see
  the sketch after this list).
- Clear active_vmids on vCPU schedule out to avoid unnecessarily
  reserving the VMID space (patch #3).
- I have kept struct kvm_vmid as it is for now (instead of a typedef
  as suggested), as we may soon add another variable to it when we
  introduce pinned KVM VMID support.
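As a rough sketch of the rollover change above (illustrative only;
kvm_call_hyp() and __kvm_flush_vm_context are existing KVM helpers,
and flush_context() is the name used by the arm64 ASID allocator):

	/*
	 * Rollover path: VMID rollovers are expected to be much rarer
	 * than ASID rollovers, so rather than marking each CPU
	 * flush_pending and issuing a local TLB flush at its next
	 * context switch (as the ASID allocator does), the
	 * invalidation is broadcast once, right here.
	 */
	static void flush_context(void)
	{
		/* ... move currently-active VMIDs to the reserved set ... */

		/* Broadcast TLB + I-cache invalidation for all guest contexts. */
		kvm_call_hyp(__kvm_flush_vm_context);
	}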
RFCv1 --> v2
-Dropped "pinned VMID" support for now.
-Dropped RFC tag.
RFCv1
https://lore.kernel.org/kvmarm/20210506165232.1969-1-shameerali.kolothum.thodi@huawei.com/
Julien Grall (1):
KVM: arm64: Align the VMID allocation with the arm64 ASID
Shameer Kolothum (3):
KVM: arm64: Introduce a new VMID allocator for KVM
KVM: arm64: Make VMID bits accessible outside of allocator
KVM: arm64: Make active_vmids invalid on vCPU schedule out
arch/arm64/include/asm/kvm_host.h | 10 +-
arch/arm64/include/asm/kvm_mmu.h | 4 +-
arch/arm64/kernel/image-vars.h | 3 +
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/arm.c | 106 +++-----------
arch/arm64/kvm/hyp/nvhe/mem_protect.c | 3 +-
arch/arm64/kvm/mmu.c | 1 -
arch/arm64/kvm/vmid.c | 196 ++++++++++++++++++++++++++
8 files changed, 228 insertions(+), 97 deletions(-)
create mode 100644 arch/arm64/kvm/vmid.c
--
2.17.1