Message-ID: <20090428163300.GD5978@nowhere>
Date: Tue, 28 Apr 2009 18:33:02 +0200
From: Frederic Weisbecker <fweisbec@...il.com>
To: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc: Ingo Molnar <mingo@...e.hu>, Steven Rostedt <rostedt@...dmis.org>,
Li Zefan <lizf@...fujitsu.com>, linux-kernel@...r.kernel.org
Subject: Re: LTTng "TIF_KERNEL_TRACE"

On Tue, Apr 28, 2009 at 11:40:46AM -0400, Mathieu Desnoyers wrote:
> * Ingo Molnar (mingo@...e.hu) wrote:
> >
> > * Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca> wrote:
> >
> > > Hi Ingo,
> > >
> > > Looking at the current -tip tree, I notice that the
> > > TIF_SYSCALL_FTRACE flag is only implemented for x86.
> > >
> > > I have TIF_KERNEL_TRACE in my lttng tree which applies to all
> > > architectures to do the exact same thing:
> > >
> > > lttng-kernel-trace-thread-flag-alpha.patch
> > > lttng-kernel-trace-thread-flag-arm.patch
> > > lttng-kernel-trace-thread-flag-avr32.patch
> > > lttng-kernel-trace-thread-flag-blackfin.patch
> > > lttng-kernel-trace-thread-flag-cris.patch
> > > lttng-kernel-trace-thread-flag-frv.patch
> > > lttng-kernel-trace-thread-flag-h8300.patch
> > > lttng-kernel-trace-thread-flag-ia64.patch
> > > lttng-kernel-trace-thread-flag-m32r.patch
> > > lttng-kernel-trace-thread-flag-m68k.patch
> > > lttng-kernel-trace-thread-flag-mips.patch
> > > lttng-kernel-trace-thread-flag-parisc.patch
> > > lttng-kernel-trace-thread-flag-powerpc.patch
> > > lttng-kernel-trace-thread-flag-s390.patch
> > > lttng-kernel-trace-thread-flag-sh.patch
> > > lttng-kernel-trace-thread-flag-sparc.patch
> > > lttng-kernel-trace-thread-flag-um.patch
> > > lttng-kernel-trace-thread-flag-x86.patch
> > > lttng-kernel-trace-thread-flag-xtensa.patch
> > > lttng-kernel-trace-thread-flag-api.patch
> > >
> > > Is there any way we could get this merged?
> > >
> > > One thing I like about the name TIF_KERNEL_TRACE compared to
> > > TIF_SYSCALL_FTRACE is that it gives us a per-thread flag that
> > > could eventually be used for more kernel tracing purposes than
> > > just syscalls.
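> > >
> > > As a minimal illustration (hypothetical helper and hook names, not
> > > code from the patches above), syscall entry could then test the
> > > flag like this:
> > >
> > > 	/* run at syscall entry; trace_syscall_entry() is illustrative */
> > > 	static inline void kernel_trace_syscall_entry(struct pt_regs *regs)
> > > 	{
> > > 		if (test_thread_flag(TIF_KERNEL_TRACE))
> > > 			trace_syscall_entry(regs);
> > > 	}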
> >
> > Yeah - TIF_KERNEL_TRACE indeed sounds more descriptive and less
> > restrictive. TIF_SYSCALL_FTRACE was a bit ad-hoc.
> >
>
> Second question:
>
> LTTng:
>
> 	read_lock(&tasklist_lock);
> 	do_each_thread(p, t) {
> 		set_tsk_thread_flag(t, TIF_KERNEL_TRACE);
> 	} while_each_thread(p, t);
> 	read_unlock(&tasklist_lock);
>
> Ftrace:
>
> 	read_lock_irqsave(&tasklist_lock, flags);
>
> 	do_each_thread(g, t) {
> 		clear_tsk_thread_flag(t, TIF_SYSCALL_FTRACE);
> 	} while_each_thread(g, t);
>
> 	read_unlock_irqrestore(&tasklist_lock, flags);
>
> With or without irqsave?
>
> Arguments against irqsave for this read lock:
>
> - It is not used consistently for this read lock all over the kernel;
>   sometimes the read lock is taken without irqsave.
> - It can be a long iteration, and therefore disables interrupts for a
>   long time.
>
> Arguments for irqsave for this read lock:
>
> - Taking any kind of spin/rwlock with inconsistent irq disabling leads
>   to situations where interrupts can be disabled for an unbounded
>   amount of time: a CPU spinning on the lock with irqs off may wait on
>   a holder that runs with irqs on (see the sketch below). This is a
>   general problem with current kernel rwlock usage. See my
>   "priority sifting reader-writer lock" patchset for a fix to this
>   problem.
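>
> A minimal sketch of the hazard (two CPUs, names illustrative):
>
> 	/* CPU 0: reader path, irqs enabled */
> 	read_lock(&tasklist_lock);
> 	/* a long interrupt can fire here while the lock is held */
> 	read_unlock(&tasklist_lock);
>
> 	/* CPU 1: writer path, spins with irqs disabled */
> 	write_lock_irqsave(&tasklist_lock, flags);
> 	/* irqs stay off on CPU 1 until CPU 0's read section ends */
> 	write_unlock_irqrestore(&tasklist_lock, flags);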
>
> Mathieu
>

I don't know why I used irqsave here; I guess I was tired.

$ git grep "read_lock_irqsave(&tasklist_lock" | wc -l
0
$ git grep "write_lock_irqsave(&tasklist_lock" | wc -l
0

The lock is never taken in an irq-safe fashion, unless one of these
call sites already runs with irqs disabled.

Lockdep should even have complained about this: once a lock class has
been held in an irq-safe fashion, taking it later in an irq-unsafe
fashion makes the state of the kernel unsafe, and lockdep is supposed
to report exactly that.
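
As a minimal illustration of the inconsistency lockdep reports
(hypothetical lock, simplified):

	static DEFINE_RWLOCK(example_lock);

	/* taken from hardirq context: the class is marked irq-safe */
	static irqreturn_t example_irq(int irq, void *dev)
	{
		write_lock(&example_lock);
		write_unlock(&example_lock);
		return IRQ_HANDLED;
	}

	/*
	 * Later taken with irqs enabled: lockdep flags the inconsistent
	 * {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
	 */
	static void example_update(void)
	{
		write_lock(&example_lock);	/* should be write_lock_irqsave() */
		write_unlock(&example_lock);
	}
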
Frederic.