Message-ID: <42bcdd9100bf4c63b79d2b72bd6db951@huawei.com>
Date: Mon, 9 Feb 2026 13:14:07 +0000
From: "yezhenyu (A)" <yezhenyu2@...wei.com>
To: "rananta@...gle.com" <rananta@...gle.com>, "will@...nel.org"
<will@...nel.org>, "maz@...nel.org" <maz@...nel.org>,
"oliver.upton@...ux.dev" <oliver.upton@...ux.dev>, "catalin.marinas@....com"
<catalin.marinas@....com>, "dmatlack@...gle.com" <dmatlack@...gle.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvmarm@...ts.linux.dev" <kvmarm@...ts.linux.dev>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>, zhengchuan <zhengchuan@...wei.com>,
Xiexiangyou <xiexiangyou@...wei.com>, "guoqixin (A)" <guoqixin2@...wei.com>,
"Mawen (Wayne)" <wayne.ma@...wei.com>
Subject: [RFC][PATCH] arm64: tlb: call kvm_call_hyp once during
kvm_tlb_flush_vmid_range
From 9982be89f55bd99b3683337223284f0011ed248e Mon Sep 17 00:00:00 2001
From: eillon <yezhenyu2@...wei.com>
Date: Mon, 9 Feb 2026 19:48:46 +0800
Subject: [RFC][PATCH v1] arm64: tlb: call kvm_call_hyp once during
kvm_tlb_flush_vmid_range
The kvm_tlb_flush_vmid_range() function is performance-critical during
live migration. On systems that support TLB invalidation by range, it
flushes the range in a while loop, issuing one kvm_call_hyp() per chunk
of at most MAX_TLBI_RANGE_PAGES pages, so any range larger than
MAX_TLBI_RANGE_PAGES causes repeated hypercalls. During migration this
makes kvm_clear_dirty_log_protect() account for a large share of the
time (more than 50%). So, when the range is larger than
MAX_TLBI_RANGE_PAGES pages, call __kvm_tlb_flush_vmid() directly and
flush the whole VMID instead, to optimize performance.
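
For a rough sense of the cost (assuming 4KiB pages, where
MAX_TLBI_RANGE_PAGES = __TLBI_RANGE_PAGES(31, 3) = 2M pages, i.e. about
8GiB of address space per hypercall): flushing a 512GiB range would
take 64 __kvm_tlb_flush_vmid_range hypercalls with the existing loop,
but only a single __kvm_tlb_flush_vmid call with this change, at the
cost of invalidating all stage-2 entries for the VMID rather than just
the requested range.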
---
arch/arm64/kvm/hyp/pgtable.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 874244df7..9da22b882 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -675,21 +675,19 @@ static bool stage2_has_fwb(struct kvm_pgtable *pgt)
 void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
 				phys_addr_t addr, size_t size)
 {
-	unsigned long pages, inval_pages;
+	unsigned long pages = size >> PAGE_SHIFT;
 
-	if (!system_supports_tlb_range()) {
+	/*
+	 * This function is performance-critical during live migration;
+	 * thus, when the address range is larger than MAX_TLBI_RANGE_PAGES,
+	 * directly call __kvm_tlb_flush_vmid to optimize performance.
+	 */
+	if (!system_supports_tlb_range() || pages > MAX_TLBI_RANGE_PAGES) {
 		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
 		return;
 	}
 
-	pages = size >> PAGE_SHIFT;
-	while (pages > 0) {
-		inval_pages = min(pages, MAX_TLBI_RANGE_PAGES);
-		kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, inval_pages);
-
-		addr += inval_pages << PAGE_SHIFT;
-		pages -= inval_pages;
-	}
+	kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, pages);
 }
 
 #define KVM_S2_MEMATTR(pgt, attr)	PAGE_S2_MEMATTR(attr, stage2_has_fwb(pgt))
--
2.43.0