Message-Id: <20201205004057.32199-6-paulmck@kernel.org>
Date: Fri, 4 Dec 2020 16:40:57 -0800
From: paulmck@...nel.org
To: rcu@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, kernel-team@...com, mingo@...nel.org,
jiangshanlai@...il.com, akpm@...ux-foundation.org,
mathieu.desnoyers@...icios.com, josh@...htriplett.org,
tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
oleg@...hat.com, joel@...lfernandes.org,
"Paul E. McKenney" <paulmck@...nel.org>,
Ming Lei <ming.lei@...hat.com>, Jens Axboe <axboe@...nel.dk>
Subject: [PATCH sl-b 6/6] percpu_ref: Print stack trace upon reference-count underflow
From: "Paul E. McKenney" <paulmck@...nel.org>

In some cases, the allocator return address is in a common function, so
that more information is needed.  For example, a percpu_ref
reference-count underflow is detected in an RCU callback function that
has access only to a block of memory that is always allocated in
percpu_ref_init(), so the allocation return address by itself is
unhelpful.

This commit therefore causes the percpu_ref_switch_to_atomic_rcu()
function to use the new kmem_last_alloc_stack() function to collect
and print a stack trace upon reference-count underflow.  This requires
that the kernel use the slub allocator and be built with
CONFIG_STACKTRACE=y.  As always, slub debugging must be enabled one way
or another, for example, by booting with the "slub_debug=U" kernel boot
parameter.
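
For reference, below is a minimal sketch of how these helpers might be
used outside of percpu_ref.  The report_last_alloc() wrapper is
hypothetical and not part of this patch; the signatures of
kmem_last_alloc_stack() and kmem_last_alloc_errstring() are inferred
from their use in the diff below, their declaring header is assumed,
and the sketch assumes that unused entries of the stack array are
NULL-filled, which is the same assumption the loop in this patch makes.

	/*
	 * Hypothetical helper (not part of this patch): report where a
	 * suspect slab-allocated object was allocated, using the helpers
	 * added earlier in this series.  Assumes the helpers are visible
	 * here, e.g. via <linux/slab.h>.
	 */
	static void report_last_alloc(void *object)
	{
		void *stack[8];
		void *allocaddr;
		const char *err;
		int i;

		/* Allocation return address plus up to 8 call-stack entries. */
		allocaddr = kmem_last_alloc_stack(object, stack, ARRAY_SIZE(stack));
		err = kmem_last_alloc_errstring(allocaddr);
		if (err) {
			pr_err("allocation info unavailable: %s\n", err);
			return;
		}
		pr_err("object allocated at %pS\n", allocaddr);
		for (i = 0; i < ARRAY_SIZE(stack); i++) {
			if (!stack[i])
				break;	/* Assumed NULL-terminated. */
			pr_err("\t%pS\n", stack[i]);
		}
	}

As with the patch itself, such a sketch produces useful output only
when the kernel uses the slub allocator, is built with
CONFIG_STACKTRACE=y, and has slub debugging enabled, for example via
the "slub_debug=U" kernel boot parameter.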
Cc: Ming Lei <ming.lei@...hat.com>
Cc: Jens Axboe <axboe@...nel.dk>
Reported-by: Andrii Nakryiko <andrii@...nel.org>
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
---
lib/percpu-refcount.c | 24 +++++++++++++++++-------
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index 8c7b21a0..ebdfa47 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -169,8 +169,6 @@ static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu)
 	struct percpu_ref *ref = data->ref;
 	unsigned long __percpu *percpu_count = percpu_count_ptr(ref);
 	unsigned long count = 0;
-	void *allocaddr;
-	const char *allocerr;
 	int cpu;
 
 	for_each_possible_cpu(cpu)
@@ -194,14 +192,26 @@ static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu)
 	atomic_long_add((long)count - PERCPU_COUNT_BIAS, &data->count);
 
 	if (atomic_long_read(&data->count) <= 0) {
-		allocaddr = kmem_last_alloc(data);
+		void *allocaddr;
+		const char *allocerr;
+		void *allocstack[8];
+		int i;
+
+		allocaddr = kmem_last_alloc_stack(data, allocstack, ARRAY_SIZE(allocstack));
 		allocerr = kmem_last_alloc_errstring(allocaddr);
-		if (allocerr)
+		if (allocerr) {
 			WARN_ONCE(1, "percpu ref (%ps) <= 0 (%ld) after switching to atomic (%s)",
 				  data->release, atomic_long_read(&data->count), allocerr);
-		else
-			WARN_ONCE(1, "percpu ref (%ps) <= 0 (%ld) after switching to atomic (allocated at %pS)",
-				  data->release, atomic_long_read(&data->count), allocaddr);
+		} else {
+			pr_err("percpu ref (%ps) <= 0 (%ld) after switching to atomic (allocated at %pS)\n",
+			       data->release, atomic_long_read(&data->count), allocaddr);
+			for (i = 0; i < ARRAY_SIZE(allocstack); i++) {
+				if (!allocstack[i])
+					break;
+				pr_err("\t%pS\n", allocstack[i]);
+			}
+			WARN_ON_ONCE(1);
+		}
 	}
 
 	/* @ref is viewed as dead on all CPUs, send out switch confirmation */
--
2.9.5