Message-ID: <20251024182628.68921-1-qq570070308@gmail.com>
Date: Sat, 25 Oct 2025 02:26:25 +0800
From: Xie Yuanbin <qq570070308@...il.com>
To: linux@...linux.org.uk,
mathieu.desnoyers@...icios.com,
paulmck@...nel.org,
pjw@...nel.org,
palmer@...belt.com,
aou@...s.berkeley.edu,
alex@...ti.fr,
hca@...ux.ibm.com,
gor@...ux.ibm.com,
agordeev@...ux.ibm.com,
borntraeger@...ux.ibm.com,
svens@...ux.ibm.com,
davem@...emloft.net,
andreas@...sler.com,
tglx@...utronix.de,
mingo@...hat.com,
bp@...en8.de,
dave.hansen@...ux.intel.com,
hpa@...or.com,
luto@...nel.org,
peterz@...radead.org,
acme@...nel.org,
namhyung@...nel.org,
mark.rutland@....com,
alexander.shishkin@...ux.intel.com,
jolsa@...nel.org,
irogers@...gle.com,
adrian.hunter@...el.com,
anna-maria@...utronix.de,
frederic@...nel.org,
juri.lelli@...hat.com,
vincent.guittot@...aro.org,
dietmar.eggemann@....com,
rostedt@...dmis.org,
bsegall@...gle.com,
mgorman@...e.de,
vschneid@...hat.com,
qq570070308@...il.com,
thuth@...hat.com,
riel@...riel.com,
akpm@...ux-foundation.org,
david@...hat.com,
lorenzo.stoakes@...cle.com,
segher@...nel.crashing.org,
ryan.roberts@....com,
max.kellermann@...os.com,
urezki@...il.com,
nysal@...ux.ibm.com
Cc: x86@...nel.org,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
linux-riscv@...ts.infradead.org,
linux-s390@...r.kernel.org,
sparclinux@...r.kernel.org,
linux-perf-users@...r.kernel.org,
will@...nel.org
Subject: [PATCH 0/3] Optimize code generation during context switching

This series optimizes the performance of context switching. It does not
change any code logic; it only adjusts the inline attributes of a few
functions.

The original motivation for this series is that, while debugging a
scheduling performance problem, I found that finish_task_switch was not
being inlined, even at the -O2 optimization level. This may hurt
performance for the following reasons:
1. It is on the context-switch path and is called frequently.
2. Because of modern CPU vulnerability mitigations, switch_mm may flush
the instruction pipeline and caches, increasing branch mispredictions and
cache misses. finish_task_switch runs right after that, so the extra call
overhead hurts even more.
3. The __schedule function carries the __sched attribute, which places it
in the ".sched.text" section, while finish_task_switch does not, so the
two end up far apart in the binary, aggravating the degradation above
(see the sketch after this list).
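
For reference, a minimal sketch of how __sched groups scheduler code; the
macro below is paraphrased from the kernel headers, and the comments are
mine, not taken from this series:

	/*
	 * Paraphrased: __sched places a function in the .sched.text
	 * section (it is also used to skip such frames in wchan output).
	 */
	#define __sched		__section(".sched.text")

	/*
	 * __schedule() is tagged __sched and therefore lives in
	 * .sched.text, while finish_task_switch() is not, so an
	 * out-of-line call between the two can jump across a large
	 * distance in the kernel text.
	 */
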
I also noticed that on x86, enter_lazy_tlb is not inlined. It is very
short, and since the cpu_tlbstate and cpu_tlbstate_shared variables are
global, it can be fully inlined. On other architectures this function is
already implemented inline.
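
For context, the current x86 implementation is roughly the following
(paraphrased from arch/x86/mm/tlb.c, comments mine); as the diffstat below
suggests, the series moves it into arch/x86/include/asm/mmu_context.h as
an inline function:

	void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
	{
		/* If init_mm is already loaded, there is nothing to defer. */
		if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
			return;

		/* Mark this CPU lazy so remote TLB flushes can be deferred. */
		this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
	}
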
This series mainly does the following:
1. Make enter_lazy_tlb inline on x86.
2. Make finish_task_switch be inlined into the context-switch code.
3. Mark the subfunctions called by finish_task_switch as always inline:
When finish_task_switch becomes an inline function, the number of call
sites of its subfunctions in this translation unit grows because of the
inline expansion of finish_task_switch. For example, finish_lock_switch
originally had only one call site in core.o (inside finish_task_switch);
once finish_task_switch is inlined, it has two.
Because of the compiler's inlining heuristics, these subfunctions may then
flip from being inlined to being emitted out of line, which would actually
be a performance regression. So I mark some subfunctions of
finish_task_switch as always inline to prevent that (a sketch of the
pattern follows the next paragraph).
These functions are either very short or have only a single call site in
the entire kernel, so marking them always inline has little impact on
size. I measured no change in the size of the bzImage with an -Os build.
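
The pattern is simply switching such helpers from inline to
__always_inline. A minimal, generic sketch (the helper names and bodies
here are illustrative, not the actual kernel code):

	/*
	 * "inline" is only a hint the compiler may ignore once a helper
	 * gains additional call sites; __always_inline forces the
	 * expansion regardless of the heuristics.
	 */
	static __always_inline int small_helper(int x)
	{
		return x + 1;
	}

	static __always_inline int caller(int x)
	{
		/*
		 * caller() itself is expanded at several sites, multiplying
		 * the apparent call sites of small_helper(); without
		 * __always_inline the compiler could decide to emit
		 * small_helper() out of line.
		 */
		return small_helper(x) * 2;
	}
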
Xie Yuanbin (3):
arch/arm/include/asm/mmu_context.h | 6 +++++-
arch/riscv/include/asm/sync_core.h | 2 +-
arch/s390/include/asm/mmu_context.h | 6 +++++-
arch/sparc/include/asm/mmu_context_64.h | 6 +++++-
arch/x86/include/asm/mmu_context.h | 22 +++++++++++++++++++++-
arch/x86/include/asm/sync_core.h | 2 +-
arch/x86/mm/tlb.c | 21 ---------------------
include/linux/perf_event.h | 2 +-
include/linux/sched/mm.h | 10 +++++-----
include/linux/tick.h | 4 ++--
include/linux/vtime.h | 8 ++++----
kernel/sched/core.c | 20 +++++++++++++-------
12 files changed, 63 insertions(+), 46 deletions(-)
--
2.51.0