Message-ID: <20090415102257.GA2617@redhat.com>
Date: Wed, 15 Apr 2009 12:22:57 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Zhaolei <zhaolei@...fujitsu.com>,
Steven Rostedt <rostedt@...dmis.org>,
Tom Zanussi <tzanussi@...il.com>, Ingo Molnar <mingo@...e.hu>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/4] ftrace: introduce workqueue_handler_exit
tracepoint and rename workqueue_execution to workqueue_handler_entry
I am very sorry for the delay. I am going to study these patches later.
Just one note:
> On Mon, Apr 13, 2009 at 02:53:01PM +0900, KOSAKI Motohiro wrote:
>
> > Subject: [PATCH] ftrace: introduce workqueue_handler_exit tracepoint and rename workqueue_execution to workqueue_handler_entry
> >
> > An entry/exit handler pair is a useful, common tracepoint technique. It can measure the handler's execution time.
...
> > static void run_workqueue(struct cpu_workqueue_struct *cwq)
> > {
> > @@ -282,7 +283,6 @@ static void run_workqueue(struct cpu_wor
> > */
> > struct lockdep_map lockdep_map = work->lockdep_map;
> > #endif
> > - trace_workqueue_execution(cwq->thread, work);
> > cwq->current_work = work;
> > list_del_init(cwq->worklist.next);
> > spin_unlock_irq(&cwq->lock);
> > @@ -291,7 +291,9 @@ static void run_workqueue(struct cpu_wor
> > work_clear_pending(work);
> > lock_map_acquire(&cwq->wq->lockdep_map);
> > lock_map_acquire(&lockdep_map);
> > + trace_workqueue_handler_entry(cwq->thread, work);
> > f(work);
> > + trace_workqueue_handler_exit(cwq->thread, work);
This doesn't look right. We must not use "work" after f(work): work->func()
can kfree its own work_struct.

That is why we copy lockdep_map beforehand. Perhaps ftrace should do
something similar (see the sketch below).
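
For illustration only, a minimal sketch of what "something similar" could
look like, assuming the exit tracepoint's signature is changed to take the
work function instead of the work itself (run_workqueue() already saves it
in a local, f, before the call); this is not part of the posted patch:

	work_func_t f = work->func;	/* saved while "work" is still valid */
	...
	trace_workqueue_handler_entry(cwq->thread, work);
	f(work);
	/*
	 * "work" may already be freed here: report only values
	 * captured before f() ran.
	 */
	trace_workqueue_handler_exit(cwq->thread, f);

Like the lockdep_map copy above, the idea is to snapshot whatever the
post-f() code needs and never dereference "work" afterwards.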
Oleg.