Message-ID: <172052531830.2215.8399866231682397481.tip-bot2@tip-bot2>
Date: Tue, 09 Jul 2024 11:41:58 -0000
From: "tip-bot2 for Sebastian Andrzej Siewior" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Marco Elver <elver@...gle.com>, x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: perf/core] perf: Shrink the size of the recursion counter.

The following commit has been merged into the perf/core branch of tip:

Commit-ID: 5af42f928f3ac555c228740fb4a92d05b19fdd49
Gitweb: https://git.kernel.org/tip/5af42f928f3ac555c228740fb4a92d05b19fdd49
Author: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
AuthorDate: Thu, 04 Jul 2024 19:03:38 +02:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Tue, 09 Jul 2024 13:26:35 +02:00

perf: Shrink the size of the recursion counter.

There are four recursion counters, one for each context. The type of the
counter is `int', but the counter is used as a `bool' since it is only
incremented if it is zero.

The main goal here is to shrink the whole struct to a 32-bit int, which can
later be added to task_struct in an existing hole.

Reduce the type of the recursion counter to an unsigned char, keeping the
increment/decrement operations.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Tested-by: Marco Elver <elver@...gle.com>
Link: https://lore.kernel.org/r/20240704170424.1466941-5-bigeasy@linutronix.de
---
kernel/events/callchain.c | 2 +-
kernel/events/core.c | 2 +-
kernel/events/internal.h | 4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
index 1273be8..ad57944 100644
--- a/kernel/events/callchain.c
+++ b/kernel/events/callchain.c
@@ -29,7 +29,7 @@ static inline size_t perf_callchain_entry__sizeof(void)
sysctl_perf_event_max_contexts_per_stack));
}
-static DEFINE_PER_CPU(int, callchain_recursion[PERF_NR_CONTEXTS]);
+static DEFINE_PER_CPU(u8, callchain_recursion[PERF_NR_CONTEXTS]);
static atomic_t nr_callchain_events;
static DEFINE_MUTEX(callchain_mutex);
static struct callchain_cpus_entries *callchain_cpus_entries;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 73e1b02..53e2750 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -9765,7 +9765,7 @@ struct swevent_htable {
int hlist_refcount;
/* Recursion avoidance in each contexts */
- int recursion[PERF_NR_CONTEXTS];
+ u8 recursion[PERF_NR_CONTEXTS];
};
static DEFINE_PER_CPU(struct swevent_htable, swevent_htable);
diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index 386d21c..7f06b79 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -208,7 +208,7 @@ arch_perf_out_copy_user(void *dst, const void *src, unsigned long n)
DEFINE_OUTPUT_COPY(__output_copy_user, arch_perf_out_copy_user)
-static inline int get_recursion_context(int *recursion)
+static inline int get_recursion_context(u8 *recursion)
{
unsigned char rctx = interrupt_context_level();
@@ -221,7 +221,7 @@ static inline int get_recursion_context(int *recursion)
return rctx;
}
-static inline void put_recursion_context(int *recursion, int rctx)
+static inline void put_recursion_context(u8 *recursion, int rctx)
{
barrier();
recursion[rctx]--;
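
For illustration only, a simplified single-threaded userspace model of the
guard pattern these helpers implement. Unlike the kernel code, which derives
the context index from interrupt_context_level(), this sketch takes it as a
parameter; the u8 counters behave the same as int counters here because each
slot only ever holds 0 or 1:

	#include <stdint.h>
	#include <stdio.h>

	#define PERF_NR_CONTEXTS 4

	static uint8_t recursion[PERF_NR_CONTEXTS];

	/* Claim the counter for context rctx; -1 if already active there. */
	static int get_recursion_context(uint8_t *rec, int rctx)
	{
		if (rec[rctx])
			return -1;
		rec[rctx]++;
		return rctx;
	}

	/* Release the counter claimed above. */
	static void put_recursion_context(uint8_t *rec, int rctx)
	{
		rec[rctx]--;
	}

	int main(void)
	{
		int rctx = get_recursion_context(recursion, 0);

		printf("outer get: %d\n", rctx);	/* 0: claimed */
		/* A nested attempt in the same context is refused. */
		printf("nested get: %d\n",
		       get_recursion_context(recursion, 0));	/* -1 */
		put_recursion_context(recursion, rctx);
		return 0;
	}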