Message-Id: <1473759914-17003-9-git-send-email-byungchul.park@lge.com>
Date: Tue, 13 Sep 2016 18:45:07 +0900
From: Byungchul Park <byungchul.park@....com>
To: peterz@...radead.org, mingo@...nel.org
Cc: tglx@...utronix.de, walken@...gle.com, boqun.feng@...il.com,
kirill@...temov.name, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, iamjoonsoo.kim@....com,
akpm@...ux-foundation.org, npiggin@...il.com
Subject: [PATCH v3 08/15] lockdep: Make crossrelease use save_stack_trace_fast()
Currently the crossrelease feature uses save_stack_trace() to save
backtraces. However, it incurs considerable overhead. So this patch
makes it use save_stack_trace_fast() instead, which is cheaper.
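
For context, a frame-pointer walk is typically much cheaper than the
generic unwinder behind save_stack_trace(). The snippet below is only an
illustrative sketch of what such a fast variant could look like; the
function body, the fp[0]/fp[1] frame layout, and the use of
__kernel_text_address() are assumptions for illustration, not the actual
implementation introduced earlier in this series:

	#include <linux/stacktrace.h>
	#include <linux/kernel.h>

	/* Hypothetical sketch of a frame-pointer based fast unwinder. */
	void save_stack_trace_fast(struct stack_trace *trace)
	{
		/* Start from the current frame pointer. */
		unsigned long *fp = __builtin_frame_address(0);

		while (fp && trace->nr_entries < trace->max_entries) {
			unsigned long ret_addr = fp[1];	/* saved return address */

			/* Stop once we leave kernel text. */
			if (!__kernel_text_address(ret_addr))
				break;

			if (trace->skip > 0)
				trace->skip--;
			else
				trace->entries[trace->nr_entries++] = ret_addr;

			fp = (unsigned long *)fp[0];	/* previous frame pointer */
		}
	}

Such a walk only records return addresses reachable through saved frame
pointers, trading completeness for speed, which is acceptable here since
the trace is used for reporting rather than correctness.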
Signed-off-by: Byungchul Park <byungchul.park@....com>
---
kernel/locking/lockdep.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 2c8b2c1..fbd07ee 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4768,7 +4768,7 @@ static void add_plock(struct held_lock *hlock, unsigned int prev_gen_id,
plock->trace.max_entries = MAX_PLOCK_TRACE_ENTRIES;
plock->trace.entries = plock->trace_entries;
plock->trace.skip = 3;
- save_stack_trace(&plock->trace);
+ save_stack_trace_fast(&plock->trace);
}
}
--
1.9.1