Message-Id: <20240111060757.13563-1-byungchul@sk.com>
Date: Thu, 11 Jan 2024 15:07:50 +0900
From: Byungchul Park <byungchul@...com>
To: linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Cc: kernel_team@...ynix.com,
akpm@...ux-foundation.org,
ying.huang@...el.com,
namit@...are.com,
xhao@...ux.alibaba.com,
mgorman@...hsingularity.net,
hughd@...gle.com,
willy@...radead.org,
david@...hat.com,
peterz@...radead.org,
luto@...nel.org,
tglx@...utronix.de,
mingo@...hat.com,
bp@...en8.de,
dave.hansen@...ux.intel.com
Subject: [v5 0/7] Reduce TLB flushes by 94% by improving folio migration
Hi everyone,
While working with CXL memory, I have been facing migration overhead,
especially TLB shootdown on promotion or demotion between different
tiers. Most TLB shootdowns on migration through hinting fault can
already be avoided thanks to Huang Ying's work, commit 4d4b6d66db
("mm,unmap: avoid flushing TLB in batch if PTE is inaccessible"). See
the following link:

https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/

However, that covers only migrations triggered by hinting faults. It
would be much better to have a general mechanism that reduces the
number of TLB flushes and TLB misses and that can be applied to any
type of migration. For now, though, I have applied it only to tiering
migration.
I'm proposing a mechanism that reduces TLB flushes by keeping both the
source and destination folios participating in a migration alive until
all the required TLB flushes have been done, but only if none of those
folios are mapped by PTE entries with write permission. The series is
based on v6.7.
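
To illustrate the core idea, here is a minimal userspace sketch, not
the kernel code itself: source folios of read-only mappings are parked
on a pending list instead of each migration paying for its own flush,
and a single batched flush releases them all at once. Every name in it
(pending_folio, keep_until_flush, ...) is illustrative only.

#include <stdio.h>
#include <stdlib.h>

struct pending_folio {
	int id;				/* stand-in for a source folio */
	struct pending_folio *next;
};

/* Folios whose freeing is deferred until the batched flush. */
static struct pending_folio *pending_list;

/* Defer freeing a migration source instead of flushing per folio. */
static void keep_until_flush(int id)
{
	struct pending_folio *p = malloc(sizeof(*p));

	if (!p)
		return;
	p->id = id;
	p->next = pending_list;
	pending_list = p;
}

/* One batched TLB flush, then release everything that waited on it. */
static void deferred_flush_and_free(void)
{
	printf("one batched TLB flush\n");
	while (pending_list) {
		struct pending_folio *p = pending_list;

		pending_list = p->next;
		printf("freeing source folio %d\n", p->id);
		free(p);
	}
}

int main(void)
{
	/* Three migrations, but only one flush below instead of three. */
	for (int i = 0; i < 3; i++)
		keep_until_flush(i);
	deferred_flush_and_free();
	return 0;
}

This is what lets N migrations share one flush instead of issuing N,
at the cost of keeping the source folios alive a little longer.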
With the workload I tested, XSBench, the number of iTLB full flushes
was reduced by 94%, iTLB misses were reduced by 45.5%, and the total
runtime was reduced by 3.5%. I believe it would help even more with
other or real-world workloads. I'd appreciate it if you could let me
know if I'm missing something.
Byungchul
---
Changes from v4:
1. Rebase on v6.7.
2. Fix build errors on arm64, which does nothing for batched TLB
   flush but still has CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH.
   (reported by kernel test robot)
3. Don't use any page flag. As a result, the system gives up the
   migrc mechanism more often, but that's okay; the final
   improvement is good enough.
4. Instead, optimize the full TLB flush (arch_tlbbatch_flush()) by
   skipping CPUs for which the flush would be redundant (see the
   sketch after this list).
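
As a rough model of that optimization, under my own naming
assumptions (flush_done, flush_gen, and so on are not the series'
identifiers), a per-CPU flush generation lets the batched flush skip
CPUs whose TLB is already up to date:

#include <stdio.h>

#define NCPUS 4

static unsigned long flush_done[NCPUS];	/* generation each CPU last flushed at */
static unsigned long flush_gen = 1;	/* global flush generation */

/* Flush only CPUs that have not yet reached the required generation. */
static void batched_flush(unsigned long required)
{
	for (int cpu = 0; cpu < NCPUS; cpu++) {
		if (flush_done[cpu] >= required)
			continue;	/* already flushed: skip the IPI */
		printf("IPI TLB flush on cpu %d\n", cpu);
		flush_done[cpu] = required;
	}
}

int main(void)
{
	unsigned long gen = flush_gen++;	/* generation this batch needs */

	batched_flush(gen);	/* first pass flushes all four CPUs */
	batched_flush(gen);	/* second pass finds nothing to do: no IPIs */
	return 0;
}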
Changes from v3:
1. Drop the kconfig option, CONFIG_MIGRC, and remove the sysctl
   knob, migrc_enable. (feedback from Nadav)
2. Remove the optimization that skipped CPUs that had already
   performed the needed TLB flushes for any reason, because I
   couldn't tell the performance difference with and without it.
   (feedback from Nadav)
3. Minimize arch-specific code. While at it, move all the migrc
   declarations and inline functions from include/linux/mm.h to
   mm/internal.h. (feedback from Dave Hansen, Nadav)
4. Separate the part that pauses migrc under high memory pressure
   into its own patch. (feedback from Nadav)
5. Rename:
   a. arch_tlbbatch_clean() to arch_tlbbatch_clear(),
   b. tlb_ubc_nowr to tlb_ubc_ro,
   c. migrc_try_flush_free_folios() to migrc_flush_free_folios(),
   d. migrc_stop to migrc_pause.
   (feedback from Nadav)
6. Use the ->lru list_head instead of introducing a new llist_head.
   (feedback from Nadav)
7. Use non-atomic page-flag operations where it's safe.
   (feedback from Nadav)
8. Use the stack instead of keeping a pointer to 'struct migrc_req'
   in struct task, since it is only manipulated locally.
   (feedback from Nadav)
9. Convert many simple functions to inline functions placed in a
   header, mm/internal.h. (feedback from Nadav)
10. Add sufficient additional comments. (feedback from Nadav)
11. Remove many wrapper functions. (feedback from Nadav)
Changes from RFC v2:
1. Remove the additional field in struct page. To do that, union
   migrc's list with the lru field and add a page flag. I know a
   new page flag is something we don't like to add, but there was
   no choice because migrc needs to distinguish folios under its
   control from the others. To limit the impact, migrc is only
   enabled on 64-bit systems.
2. Remove the internal object allocator that I had introduced to
   minimize the impact on the system; a ton of tests showed it
   made no difference.
3. Stop migrc from working when the system is under high memory
   pressure, e.g. about to perform direct reclaim. Under
   conditions where the swap mechanism was heavily used, I found
   the system suffered a regression without this control. (A
   sketch of this behaviour follows this list.)
4. Exclude folios with pte_dirty() == true from migrc's interest
   so that migrc can work more simply.
5. Combine several tightly coupled patches into one.
6. Add sufficient comments for better review.
7. Manage migrc's requests per node instead of globally.
8. Add the TLB miss improvement to the commit message.
9. Test with more CPUs (4 -> 16) to see a bigger improvement.
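
As a rough model of the pause behaviour in item 3, with an arbitrary
made-up watermark and names rather than the kernel's actual
heuristics:

#include <stdbool.h>
#include <stdio.h>

static long nr_free_pages = 100;	/* stand-in for free-memory state */
static const long pause_watermark = 32;	/* illustrative threshold only */

/* Pause deferral when free memory drops too low. */
static bool migrc_paused(void)
{
	return nr_free_pages < pause_watermark;
}

static void migration_done(int folio)
{
	if (migrc_paused())
		printf("folio %d: flush and free immediately\n", folio);
	else
		printf("folio %d: keep until the batched flush\n", folio);
}

int main(void)
{
	migration_done(1);	/* enough memory: defer the flush */
	nr_free_pages = 8;	/* simulate memory pressure */
	migration_done(2);	/* paused: free right away for reclaim */
	return 0;
}

The point of pausing is that deferred freeing holds pages back from
the allocator, which is the wrong trade exactly when reclaim needs
them most.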
Changes from RFC:
1. Fix a bug triggered when a destination folio of one migration
   becomes a source folio of the next migration before it has been
   handled properly enough to take part in another migration,
   which left the folio's state inconsistent.
2. Split the patch set into more pieces for easier review.
   (Feedback from Nadav Amit)
3. Fix a wrong usage of a barrier, e.g. smp_mb__after_atomic().
   (Feedback from Nadav Amit)
4. Try to add sufficient comments to explain the patch set better.
   (Feedback from Nadav Amit)
Byungchul Park (7):
x86/tlb: Add APIs manipulating tlb batch's arch data
arm64: tlbflush: Add APIs manipulating tlb batch's arch data
mm/rmap: Recognize read-only TLB entries during batched TLB flush
mm: Separate move/undo doing on folio list from migrate_pages_batch()
mm: Add APIs to free a folio directly to the buddy bypassing pcp
mm: Defer TLB flush by keeping both src and dst folios at migration
mm: Pause migrc mechanism at high memory pressure
arch/arm64/include/asm/tlbflush.h | 19 ++
arch/x86/include/asm/tlbflush.h | 18 ++
arch/x86/mm/tlb.c | 7 +
include/linux/mm.h | 23 ++
include/linux/mmzone.h | 5 +
include/linux/sched.h | 7 +
mm/internal.h | 84 +++++++
mm/memory.c | 8 +
mm/migrate.c | 381 +++++++++++++++++++++++++-----
mm/page_alloc.c | 34 ++-
mm/rmap.c | 37 ++-
mm/swap.c | 7 +
12 files changed, 574 insertions(+), 56 deletions(-)
--
2.17.1