Message-Id: <20191025003648.af4216cbf71bf2d5e60d2932@kernel.org>
Date: Fri, 25 Oct 2019 00:36:48 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Jiri Olsa <jolsa@...hat.com>, Namhyung Kim <namhyung@...nel.org>,
linux-kernel@...r.kernel.org
Subject: Re: [BUGFIX PATCH 0/3] perf/probe: Fix available line bugs
Hi Arnaldo,
On Thu, 24 Oct 2019 10:43:25 -0300
Arnaldo Carvalho de Melo <acme@...nel.org> wrote:
> Em Thu, Oct 24, 2019 at 06:12:27PM +0900, Masami Hiramatsu escreveu:
> > Hi,
> >
> > Here are some bugfixes related to showing available line (--line/-L)
> > option. I found that gcc generated some subprogram DIE with only
> > range attribute but no entry_pc or low_pc attributes.
> > In that case, perf probe failed to show the available lines in that
> > subprogram (function). To fix that, I introduced some bugfixes to
> > handle such cases correctly.
>
> Thanks, applied, next time please provide concrete examples for things
> that don't work before and gets fixed with your patches, this way we can
> more easily reproduce your steps.
OK, I'll try, but I found that this kind of example depends on
gcc optimization (build options, gcc version, etc.), so even if you
cannot reproduce it, don't be disappointed ;)
Anyway, for this series, I found that clear_tasks_mm_cpumask() triggered
all three issues.
Without any of the patches applied, I got the error below:
$ tools/perf/perf probe -k ../build-x86_64/vmlinux -L clear_tasks_mm_cpumask
Specified source line is not found.
Error: Failed to show lines.
Hmm, something must be wrong there... so I fixed it with [1/3].
(To find an appropriate target function, you can run "eu-readelf -w vmlinux"
and look for a subprogram DIE which has no entry_pc nor low_pc attribute
but does have ranges.)
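The idea behind handling such DIEs can be sketched roughly as below. This is a hypothetical, self-contained simplification (the struct and function names are invented, not the actual libdw/perf API): when a subprogram DIE carries neither entry_pc nor low_pc, a usable entry address can still be derived from the first entry of its range list.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified view of a subprogram DIE: gcc may emit
 * only DW_AT_ranges, leaving entry_pc/low_pc unset. */
struct range { uint64_t start, end; };

struct subprogram_die {
	bool has_entry_pc, has_low_pc;
	uint64_t entry_pc, low_pc;
	struct range *ranges;
	size_t nranges;
};

/* Resolve a usable entry address: prefer entry_pc, then low_pc,
 * and finally fall back to the start of the first range. */
static int die_entry_addr(const struct subprogram_die *die, uint64_t *addr)
{
	if (die->has_entry_pc) {
		*addr = die->entry_pc;
		return 0;
	}
	if (die->has_low_pc) {
		*addr = die->low_pc;
		return 0;
	}
	if (die->nranges > 0) {
		*addr = die->ranges[0].start;
		return 0;
	}
	return -1; /* no address info at all */
}
```

With that fallback in place, a DIE like the one gcc generated for clear_tasks_mm_cpumask() (ranges only) still yields an address to anchor the line search on.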
$ tools/perf/perf probe -k ../build-x86_64/vmlinux -L clear_tasks_mm_cpumask
<clear_tasks_mm_cpumask@...me/mhiramat/ksrc/mincs/work/linux/linux/kernel/cpu.c:
void clear_tasks_mm_cpumask(int cpu)
1 {
2 struct task_struct *p;
/*
* This function is called after the cpu is taken down and marked
* offline, so its not like new tasks will ever get this cpu set in
* their mm mask. -- Peter Zijlstra
* Thus, we may use rcu_read_lock() here, instead of grabbing
* full-fledged tasklist_lock.
*/
11 WARN_ON(cpu_online(cpu));
12 rcu_read_lock();
13 for_each_process(p) {
14 struct task_struct *t;
/*
* Main thread might exit, but other threads may still have
* a valid mm. Find one.
*/
20 t = find_lock_task_mm(p);
21 if (!t)
continue;
cpumask_clear_cpu(cpu, mm_cpumask(t->mm));
task_unlock(t);
}
26 rcu_read_unlock();
27 }
OK! It looks like it's working... but a bit weird. Why are lines #0, #23
and #24 not available?
I investigated it, found the wrong logic, and fixed it with [2/3].
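Conceptually, the kind of check involved can be sketched as below. This is a hypothetical, simplified illustration (invented names, not perf's actual code): when deciding whether a line's address belongs to the function, every entry of the DIE's range list has to be consulted, rather than a single [low_pc, high_pc) window, which such DIEs don't even have.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical range list for a subprogram DIE (DW_AT_ranges). */
struct addr_range { uint64_t start, end; };

/* A line is "available" only if its address lies inside one of the
 * function's ranges; a check against a single contiguous window can
 * wrongly drop lines (like #23/#24 above). Range ends are exclusive. */
static bool line_addr_in_func(const struct addr_range *ranges, size_t n,
			      uint64_t addr)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (addr >= ranges[i].start && addr < ranges[i].end)
			return true;
	return false;
}
```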
$ tools/perf/perf probe -k ../build-x86_64/vmlinux -L clear_tasks_mm_cpumask:20
<clear_tasks_mm_cpumask@...me/mhiramat/ksrc/mincs/work/linux/linux/kernel/cpu.c:
20 t = find_lock_task_mm(p);
21 if (!t)
continue;
23 cpumask_clear_cpu(cpu, mm_cpumask(t->mm));
24 task_unlock(t);
}
26 rcu_read_unlock();
27 }
OK, now it finds lines #23 and #24. And adding [3/3]:
$ tools/perf/perf probe -k ../build-x86_64/vmlinux -L clear_tasks_mm_cpumask
<clear_tasks_mm_cpumask@...me/mhiramat/ksrc/mincs/work/linux/linux/kernel/cpu.c:
0 void clear_tasks_mm_cpumask(int cpu)
1 {
2 struct task_struct *p;
OK, so now it works well :)
Thank you,
--
Masami Hiramatsu <mhiramat@...nel.org>