Message-ID: <20100326233259.GI7166@nowhere>
Date: Sat, 27 Mar 2010 00:33:00 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: Hitoshi Mitake <mitake@....info.waseda.ac.jp>
Cc: linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>, mingo@...e.hu,
paulus@...ba.org, tglx@...utronix.de, gregkh@...e.de
Subject: Re: [PATCH] Separate lock events with types
On Wed, Feb 24, 2010 at 06:02:46PM +0900, Hitoshi Mitake wrote:
> Sorry for my long silence...
>
> Thanks to Frederic's great work, such as trace_lock_class_init(),
> the overhead of perf lock has been reduced a lot.
> But there is still overhead that cannot be disregarded.
>
> So I'd like to suggest separating the lock trace events by lock type.
> e.g.
> trace_lock_acquire() -> spin_trace_lock_acquire(), rwlock_trace_lock_acquire()
> I think that mutexes and spinlocks are completely different things.
> And as I describe below, filtering at the recording phase can reduce the overhead of tracing.
>
> CAUTION:
> This patch is a proof of concept. The approach this patch employs
> differs from the one I described above: it adds an if statement
> before each trace_lock_*() call. Implementing separate events per
> lock type will be a big job, so this is only a trial edition for
> performance improvements.
Instead of having a different event for each lock type,
I would rather suggest adding a "lock type" field to the (future)
lock_init_class event. This requires that we implement event injection
properly first.

So if we store the lock type in the lockdep_map, we can just dump
the type on lock class initialization:

- on register_lock_class
- on event injection, to catch up with locks that have already been registered

That's what my perf/inject tree does (minus the lock type), but this
all requires a redesign on both the ftrace and perf sides.
Thanks.