Message-ID: <20091216194117.GD5211@nowhere>
Date: Wed, 16 Dec 2009 20:41:20 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: Hitoshi Mitake <mitake@....info.waseda.ac.jp>
Cc: mingo@...e.hu, linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Paul Mackerras <paulus@...ba.org>
Subject: Re: [PATCH][RFC] perf lock: Distribute numerical IDs for each lock
instances
On Mon, Dec 14, 2009 at 11:44:53PM +0900, Hitoshi Mitake wrote:
> On Mon, Dec 14, 2009 at 22:30, Frederic Weisbecker <fweisbec@...il.com> wrote:
> > So if I understand well, this maps each lockdep_map
> > into a unique index, right?
>
> There's a slight difference. This patch maps each lock instance
> (spinlock_t, rwlock_t, etc.) to a unique index.
Yeah.
> The use case I assumed is (for example) separating the copying of lock
> instance names to userspace from the lock trace events themselves.
>
> I think that copying the name of a lock at every trace event is not efficient.
> For example, an ID <-> name table can be built this way,
> so each lock event only has to output its ID.
> Then perf lock reads the table from a file on debugfs.
> Finally, perf lock can consult the table and obtain the name of each lock.
> This may reduce the data transfer between kernel and userspace.
>
> But... you are right. The same effect can also be obtained with a hashlist.
> There's no requirement to implement an array,
> and optimization should be done after implementation.
> I'll get back to coding perf lock, sorry..
>
> # But I think this would be useful for measuring the overhead of the hashlist! :)
>
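
A rough kernel-side sketch of the table Hitoshi describes above (the function
and structure names here are hypothetical, not taken from the actual patch):
each lock instance gets a small integer ID the first time it is seen, so a
trace event only has to record that ID instead of copying the lock's name.

/*
 * Hypothetical sketch only -- not the actual patch.
 */
#include <linux/hashtable.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct lock_id_entry {
	void *lock_addr;		/* address of the lock instance */
	const char *name;		/* name taken from its lockdep_map */
	unsigned int id;		/* compact numerical ID */
	struct hlist_node node;
};

static DEFINE_HASHTABLE(lock_id_hash, 10);
static DEFINE_SPINLOCK(lock_id_table_lock);
static unsigned int next_lock_id;

/* Return the ID of a lock instance, assigning a new one on first sight. */
static unsigned int lock_instance_id(void *lock_addr, const char *name)
{
	struct lock_id_entry *e;
	unsigned long flags;
	unsigned int id = 0;

	spin_lock_irqsave(&lock_id_table_lock, flags);
	hash_for_each_possible(lock_id_hash, e, node, (unsigned long)lock_addr) {
		if (e->lock_addr == lock_addr) {
			id = e->id;
			goto out;
		}
	}
	e = kzalloc(sizeof(*e), GFP_ATOMIC);
	if (e) {
		e->lock_addr = lock_addr;
		e->name = name;
		e->id = ++next_lock_id;
		hash_add(lock_id_hash, &e->node, (unsigned long)lock_addr);
		id = e->id;
	}
out:
	spin_unlock_irqrestore(&lock_id_table_lock, flags);
	return id;
}

A debugfs file could then simply walk this table and print one "id name" pair
per line for perf lock to read once, instead of carrying the name in every
trace event.
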
Ah, I understand better. Indeed, if we have such an index:lock_name mapping
available from debugfs, the tracing path would be more efficient because
we'd only need to trace the index; there's no need to copy the name.
Actually, that looks like a good idea.
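
On the userspace side, a minimal sketch of how perf lock might consume such a
table (the path /sys/kernel/debug/lock_id_table and the "id name" line format
are assumptions for illustration only): the table is read once, and each event
afterwards only needs its numeric ID resolved against it.

/* Userspace sketch: resolve lock IDs against an "id name" table. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_LOCKS 4096

static char *lock_names[MAX_LOCKS];	/* index == lock ID */

/* Load the ID <-> name table; returns entries read, or -1 on error. */
static int load_lock_table(const char *path)
{
	FILE *f = fopen(path, "r");
	unsigned int id;
	char name[128];
	int n = 0;

	if (!f)
		return -1;
	while (fscanf(f, "%u %127s", &id, name) == 2) {
		if (id < MAX_LOCKS) {
			lock_names[id] = strdup(name);
			n++;
		}
	}
	fclose(f);
	return n;
}

int main(void)
{
	unsigned int event_id = 42;	/* an ID as it would arrive in a trace event */

	/* Path and table format are assumptions, not an existing interface. */
	if (load_lock_table("/sys/kernel/debug/lock_id_table") < 0) {
		perror("load_lock_table");
		return 1;
	}
	printf("lock event: id=%u name=%s\n", event_id,
	       lock_names[event_id] ? lock_names[event_id] : "<unknown>");
	return 0;
}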