Message-ID: <20251030135652.63837-1-luxu.kernel@bytedance.com>
Date: Thu, 30 Oct 2025 21:56:48 +0800
From: Xu Lu <luxu.kernel@...edance.com>
To: pjw@...nel.org,
	palmer@...belt.com,
	aou@...s.berkeley.edu,
	alex@...ti.fr,
	apatel@...tanamicro.com,
	guoren@...nel.org
Cc: linux-riscv@...ts.infradead.org,
	linux-kernel@...r.kernel.org,
	Xu Lu <luxu.kernel@...edance.com>
Subject: [RFC PATCH v1 0/4] riscv: mm: Defer tlb flush to context_switch
When we need to flush the TLB of a remote cpu, there is no need to send an
IPI if the target cpu is not currently using the ASID we want to flush.
Instead, we can cache the TLB flush info in a percpu buffer and defer the
flush to the target cpu's next context_switch. A rough sketch of the idea
is shown below.
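
The following is only a minimal userspace model of the scheme, not the
kernel code itself: the sender skips the IPI when the target cpu's loaded
ASID differs from the one being flushed and enqueues the range into a
per-cpu queue, which the target drains at its next context switch. All
names here (flush_entry, cpu_state, queue_remote_flush, ...) are
hypothetical, and locking/memory ordering is deliberately omitted.

/*
 * Illustrative userspace model of deferred remote TLB flushes.
 * Names and layout are hypothetical and do not match the actual patches.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS        4
#define FLUSH_QUEUE_SZ 8

struct flush_entry {
	unsigned long asid;
	unsigned long start;
	unsigned long size;
};

struct cpu_state {
	unsigned long loaded_asid;                /* ASID currently active on this cpu */
	struct flush_entry queue[FLUSH_QUEUE_SZ]; /* deferred flushes */
	unsigned int nr_queued;
	bool queue_overflow;                      /* fall back to a full flush */
};

static struct cpu_state cpus[NR_CPUS];

/* Stand-ins for sending an IPI and for a local sfence.vma. */
static void send_flush_ipi(int cpu, unsigned long asid,
			   unsigned long start, unsigned long size)
{
	printf("cpu%d: IPI flush asid=%lu [%#lx, +%#lx)\n", cpu, asid, start, size);
}

static void local_flush(int cpu, unsigned long asid,
			unsigned long start, unsigned long size)
{
	printf("cpu%d: local flush asid=%lu [%#lx, +%#lx)\n", cpu, asid, start, size);
}

/*
 * Remote flush request: only IPI the target if it is currently running
 * with the ASID we want to flush; otherwise queue the flush for later.
 */
static void queue_remote_flush(int cpu, unsigned long asid,
			       unsigned long start, unsigned long size)
{
	struct cpu_state *cs = &cpus[cpu];

	if (cs->loaded_asid == asid) {
		send_flush_ipi(cpu, asid, start, size);
		return;
	}

	if (cs->nr_queued == FLUSH_QUEUE_SZ) {
		cs->queue_overflow = true;
		return;
	}

	cs->queue[cs->nr_queued++] = (struct flush_entry){ asid, start, size };
}

/* Drain the deferred flushes when the cpu switches to a new ASID. */
static void context_switch(int cpu, unsigned long next_asid)
{
	struct cpu_state *cs = &cpus[cpu];

	if (cs->queue_overflow) {
		local_flush(cpu, 0, 0, (unsigned long)-1);  /* flush everything */
	} else {
		for (unsigned int i = 0; i < cs->nr_queued; i++)
			local_flush(cpu, cs->queue[i].asid,
				    cs->queue[i].start, cs->queue[i].size);
	}
	cs->nr_queued = 0;
	cs->queue_overflow = false;
	cs->loaded_asid = next_asid;
}

int main(void)
{
	cpus[1].loaded_asid = 7;

	queue_remote_flush(1, 7, 0x1000, 0x1000);  /* ASID loaded -> IPI */
	queue_remote_flush(1, 9, 0x2000, 0x1000);  /* ASID not loaded -> deferred */
	context_switch(1, 9);                      /* deferred flush happens here */
	return 0;
}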
This reduces the number of IPIs sent for TLB flushes:
* ltp - mmapstress01
Before: ~108k
After: ~46k
Future plan for the next version:
- This series reduces IPIs by deferring TLB flushes to context_switch; it
does not clear the target mm_struct's mm_cpumask. In the next version, I
will apply a threshold to the number of ASIDs cached in each cpu's TLB.
Once the threshold is exceeded, the ASID that has not been used for the
longest time will be flushed out, and the current cpu will be cleared
from that mm's mm_cpumask (see the rough sketch after this item).
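
For illustration only, here is a rough userspace sketch of the kind of
per-cpu ASID cache with a threshold and least-recently-used eviction
described above; every name (asid_slot, track_asid, the placeholder
hooks) is hypothetical and does not reflect the planned kernel code.

/* Illustrative sketch of a per-cpu ASID cache with LRU eviction. */
#include <stdbool.h>

#define ASID_THRESHOLD 16

struct asid_slot {
	unsigned long asid;
	unsigned long last_used;   /* monotonic "time" of last activation */
	bool valid;
};

static struct asid_slot asid_cache[ASID_THRESHOLD];
static unsigned long now;

/* Placeholder hooks for the real operations. */
static void flush_asid_locally(unsigned long asid) { (void)asid; }
static void clear_cpu_in_mm_cpumask(unsigned long asid) { (void)asid; }

/*
 * Record that @asid became active on this cpu; once the per-cpu cache is
 * full, evict the least recently used ASID and stop tracking this cpu in
 * the owning mm's cpumask so future flushes can skip it.
 */
static void track_asid(unsigned long asid)
{
	int free = -1, lru = 0;

	now++;
	for (int i = 0; i < ASID_THRESHOLD; i++) {
		if (asid_cache[i].valid && asid_cache[i].asid == asid) {
			asid_cache[i].last_used = now;
			return;
		}
		if (!asid_cache[i].valid)
			free = i;
		else if (asid_cache[i].last_used < asid_cache[lru].last_used)
			lru = i;
	}

	if (free < 0) {
		flush_asid_locally(asid_cache[lru].asid);
		clear_cpu_in_mm_cpumask(asid_cache[lru].asid);
		free = lru;
	}

	asid_cache[free] = (struct asid_slot){ .asid = asid, .last_used = now, .valid = true };
}

int main(void)
{
	for (unsigned long a = 1; a <= ASID_THRESHOLD + 2; a++)
		track_asid(a);      /* the last two insertions evict LRU ASIDs 1 and 2 */
	return 0;
}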
Thanks in advance for your comments.
Xu Lu (4):
  riscv: mm: Introduce percpu loaded_asid
  riscv: mm: Introduce percpu tlb flush queue
  riscv: mm: Enqueue tlbflush info if task is not running on target cpu
  riscv: mm: Perform tlb flush during context_switch
 arch/riscv/include/asm/mmu_context.h |  1 +
 arch/riscv/include/asm/tlbflush.h    |  4 ++
 arch/riscv/mm/context.c              | 10 ++++
 arch/riscv/mm/tlbflush.c             | 76 +++++++++++++++++++++++++++-
 4 files changed, 90 insertions(+), 1 deletion(-)
-- 
2.20.1