Message-Id: <1463661613-43970-1-git-send-email-soumya.p.n@hpe.com>
Date: Thu, 19 May 2016 18:10:13 +0530
From: Soumya PN <soumya.p.n@....com>
To: rostedt@...dmis.org, mingo@...hat.com
Cc: linux-kernel@...r.kernel.org, chandru.muthu@....com,
Soumya PN <soumya.p.n@....com>
Subject: [PATCH] ftrace: Don't disable irqs when taking the tasklist_lock read_lock

In ftrace.c, inside the function alloc_retstack_tasklist() (which is
invoked when function_graph tracing is on), the tasklist_lock is being
held as reader while iterating through the list of threads. Here the
lock is being held as reader with irqs disabled.

The tasklist_lock is never write_locked in interrupt context, so it is
safe not to disable interrupts for the duration of the read_lock in
this block, which can be significant, given that the block of code
iterates through all threads. Hence change the code to call read_lock()
and read_unlock() instead of read_lock_irqsave() and
read_unlock_irqrestore().

The same change was already made in kernel/tracepoint.c and
kernel/sched/core.c by:

commit 8063e41d2ffc ("tracing: Change syscall_*regfunc() to check
PF_KTHREAD and use for_each_process_thread()")

commit 3472eaa1f12e ("sched: normalize_rt_tasks(): Don't use _irqsave
for tasklist_lock, use task_rq_lock()")

Basic testing was done by enabling function_graph tracing on x86_64.
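
To make the reasoning concrete, the sketch below (illustration only,
not part of the patch; my_lock, my_reader() and my_irq_handler() are
hypothetical names) shows when the _irqsave variant of a reader-writer
lock is actually required, and why plain read_lock() suffices when no
writer ever runs in interrupt context:

#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_RWLOCK(my_lock);

/*
 * Hypothetical writer in interrupt context. If such a writer exists,
 * a reader on the same CPU must disable interrupts: were the IRQ to
 * fire while the read lock is held, write_lock() below would spin
 * waiting for a reader that can never resume -- a self-deadlock.
 */
static irqreturn_t my_irq_handler(int irq, void *dev)
{
	write_lock(&my_lock);
	/* ... update shared state ... */
	write_unlock(&my_lock);
	return IRQ_HANDLED;
}

static void my_reader(void)
{
	unsigned long flags;

	/* Needed only because my_irq_handler() takes the write lock. */
	read_lock_irqsave(&my_lock, flags);
	/* ... walk shared state ... */
	read_unlock_irqrestore(&my_lock, flags);
}

/*
 * tasklist_lock has no interrupt-context writer, so a reader such as
 * alloc_retstack_tasklist() can keep interrupts enabled for the whole
 * (potentially long) walk over all threads:
 */
static void my_cheaper_reader(void)
{
	read_lock(&my_lock);
	/* ... iterate over all threads ... */
	read_unlock(&my_lock);
}

The saving matters in alloc_retstack_tasklist() precisely because the
read-side section covers an iteration over every thread in the system.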
Signed-off-by: Soumya PN <soumya.p.n@....com>
---
 kernel/trace/ftrace.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index b1870fb..45bc72f 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5713,7 +5713,6 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
 {
 	int i;
 	int ret = 0;
-	unsigned long flags;
 	int start = 0, end = FTRACE_RETSTACK_ALLOC_SIZE;
 	struct task_struct *g, *t;
 
@@ -5728,8 +5727,7 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
 			goto free;
 		}
 	}
-
-	read_lock_irqsave(&tasklist_lock, flags);
+	read_lock(&tasklist_lock);
 	do_each_thread(g, t) {
 		if (start == end) {
 			ret = -EAGAIN;
@@ -5747,7 +5745,7 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
 	} while_each_thread(g, t);
 
 unlock:
-	read_unlock_irqrestore(&tasklist_lock, flags);
+	read_unlock(&tasklist_lock);
 free:
 	for (i = start; i < end; i++)
 		kfree(ret_stack_list[i]);
--
1.8.3.1