Message-ID: <20100204194047.GE5733@kernel.dk>
Date: Thu, 4 Feb 2010 20:40:47 +0100
From: Jens Axboe <jens.axboe@...cle.com>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: Ingo Molnar <mingo@...e.hu>, LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Paul Mackerras <paulus@...ba.org>,
	Hitoshi Mitake <mitake@....info.waseda.ac.jp>,
	Li Zefan <lizf@...fujitsu.com>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Masami Hiramatsu <mhiramat@...hat.com>
Subject: Re: [RFC GIT PULL] perf/trace/lock optimization/scalability improvements

On Wed, Feb 03 2010, Frederic Weisbecker wrote:
> Ok, thanks a lot. The fact that you can test on a 64-thread box is
> extremely helpful.
>
> I also wonder what happens with this patch applied:
>
> diff --git a/kernel/perf_event.c b/kernel/perf_event.c
> index 98fd360..254b3d4 100644
> --- a/kernel/perf_event.c
> +++ b/kernel/perf_event.c
> @@ -3094,7 +3094,7 @@ static u32 perf_event_tid(struct perf_event *event, struct task_struct *p)
>  	if (event->parent)
>  		event = event->parent;
>  
> -	return task_pid_nr_ns(p, event->ns);
> +	return p->pid;
>  }
>
> On my box it increased the speed by another 2x on top of this patchset.
Doesn't seem to change anything, same runtime for an ls.
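(For reference, what that one-liner bypasses: task_pid_nr_ns() has to
translate the task's pid into the pid namespace the event owner lives in,
under an RCU read lock, whereas p->pid is a single field load. Below is a
simplified sketch of that translation, modeled on pid_nr_ns() in
kernel/pid.c of the 2.6.33 era - the struct layouts are pared down for
illustration and this is not the verbatim kernel source:

/*
 * Pared-down model of the pid-namespace translation that the patch
 * above short-circuits.  Based on pid_nr_ns() in kernel/pid.c circa
 * 2.6.33; simplified (no RCU, no PIDTYPE handling).
 */
struct pid_namespace {
	unsigned int level;		/* nesting depth of this namespace */
};

struct upid {
	int nr;				/* pid value as seen in ->ns */
	struct pid_namespace *ns;	/* namespace owning this value */
};

struct pid {
	unsigned int level;		/* deepest namespace this pid lives in */
	struct upid numbers[1];		/* really one entry per level */
};

static int pid_nr_ns(struct pid *pid, struct pid_namespace *ns)
{
	int nr = 0;

	/* A pid is only visible in namespaces at or above its own level. */
	if (pid && ns->level <= pid->level) {
		struct upid *upid = &pid->numbers[ns->level];

		/* The slot must actually belong to the asking namespace. */
		if (upid->ns == ns)
			nr = upid->nr;
	}
	return nr;
}

So returning p->pid directly skips the struct pid lookup and namespace
check for every emitted event, at the cost of reporting the wrong tid to
any consumer inside a pid namespace - a diagnostic hack, not a fix.)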
> I wonder if the tool becomes usable for you with that.
> If not, it means we have other things to fix, and
> the result of:
>
> perf record -g -f perf lock record sleep 6
> perf report
>
> would be very nice to have.
root@...alem:/dev/shm # perf record -g -f perf lock record sleep 6
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 446.208 MB perf.data (~19495127 samples) ]
[ perf record: Woken up 9 times to write data ]
[ perf record: Captured and wrote 1.135 MB perf.data (~49609 samples) ]
It's huuuge. Thankfully the perf report output isn't so big; I've attached it.
--
Jens Axboe
[Attachment: "perf-lock-report.txt" (text/plain, 34652 bytes)]