Message-ID: <20100504063705.GB10784@windriver.com>
Date: Tue, 4 May 2010 14:37:05 +0800
From: Yong Zhang <yong.zhang@...driver.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Yong Zhang <yong.zhang0@...il.com>, John Kacur <jkacur@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
linux-rt-users <linux-rt-users@...r.kernel.org>,
Sven-Thorsten Dietrich <thebigcorporation@...il.com>,
Clark Williams <williams@...hat.com>,
"Luis Claudio R. Goncalves" <lgoncalv@...hat.com>,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Gregory Haskins <ghaskins@...ell.com>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH] lockdep: reduce stack_trace usage
On Mon, May 03, 2010 at 02:11:29PM +0200, Peter Zijlstra wrote:
> OK, I like the idea, but I'm a little confused as to why you pull
> save_trace() up two functions,
I also want to avoid saving redundant traces in the trylock case.
On my machine, I caught about 150 trylock dependencies.
> from what I can see we can now end up
> saving a trace where we previously would not have done one (the whole
> recursive lock mess).
Oh, yes, this patch will not give the expected result. Thank you for
pointing that out. I will respin it as v2.
>
> So please respin this with save_trace() in check_prev_add() right before
> the first add_lock_to_list().
No problem.
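Something like the following is what I have in mind for v2 (just an
untested sketch, with the existing checks in check_prev_add() elided):

static int
check_prev_add(struct task_struct *curr, struct held_lock *prev,
	       struct held_lock *next, int distance)
{
	struct stack_trace trace;
	...

	/*
	 * All the recursion/redundancy checks stay above this point,
	 * so we only pay for save_trace() once we know a new
	 * dependency will really be added.
	 */
	if (!save_trace(&trace))
		return 0;

	ret = add_lock_to_list(hlock_class(prev), hlock_class(next),
			       &hlock_class(prev)->locks_after,
			       next->acquire_ip, distance, &trace);
	...
}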
BTW, for the trylock-dependency case, I will let check_prevs_add() pass a
flag down to check_prev_add(), so that we can avoid saving even more
redundant traces. What do you think about it?
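A rough, untested sketch of that on top of the above (the trylock_loop
name is just a placeholder):

static int
check_prev_add(struct task_struct *curr, struct held_lock *prev,
	       struct held_lock *next, int distance, int trylock_loop)
{
	/*
	 * Serialized by the graph_lock(), and it must stay valid across
	 * the iterations that skip the save, so make it static.
	 */
	static struct stack_trace trace;
	...
	/* Save the trace only once per check_prevs_add() loop. */
	if (!trylock_loop && !save_trace(&trace))
		return 0;
	...
}

and in check_prevs_add():

	int trylock_loop = 0;

	for (;;) {
		...
		if (hlock->read != 2) {
			if (!check_prev_add(curr, hlock, next,
					    distance, trylock_loop))
				return 0;
		}
		...
		trylock_loop = 1;
	}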
Thanks,
Yong
>
> > Signed-off-by: Yong Zhang <yong.zhang0@...il.com>
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Cc: Ingo Molnar <mingo@...hat.com>
> > Cc: David S. Miller <davem@...emloft.net>
> > ---
> > kernel/lockdep.c | 20 ++++++++++++--------
> > 1 files changed, 12 insertions(+), 8 deletions(-)
> >
> > diff --git a/kernel/lockdep.c b/kernel/lockdep.c
> > index 2594e1c..097d5fb 100644
> > --- a/kernel/lockdep.c
> > +++ b/kernel/lockdep.c
> > @@ -818,7 +818,8 @@ static struct lock_list *alloc_list_entry(void)
> > * Add a new dependency to the head of the list:
> > */
> > static int add_lock_to_list(struct lock_class *class, struct lock_class *this,
> > - struct list_head *head, unsigned long ip, int distance)
> > + struct list_head *head, unsigned long ip,
> > + int distance, struct stack_trace *trace)
> > {
> > struct lock_list *entry;
> > /*
> > @@ -829,11 +830,9 @@ static int add_lock_to_list(struct lock_class *class, struct lock_class *this,
> > if (!entry)
> > return 0;
> >
> > - if (!save_trace(&entry->trace))
> > - return 0;
> > -
> > entry->class = this;
> > entry->distance = distance;
> > + entry->trace = *trace;
> > /*
> > * Since we never remove from the dependency list, the list can
> > * be walked lockless by other CPUs, it's only allocation
> > @@ -1635,7 +1634,7 @@ check_deadlock(struct task_struct *curr, struct held_lock *next,
> > */
> > static int
> > check_prev_add(struct task_struct *curr, struct held_lock *prev,
> > - struct held_lock *next, int distance)
> > + struct held_lock *next, int distance, struct stack_trace *trace)
> > {
> > struct lock_list *entry;
> > int ret;
> > @@ -1694,14 +1693,14 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
> > */
> > ret = add_lock_to_list(hlock_class(prev), hlock_class(next),
> > &hlock_class(prev)->locks_after,
> > - next->acquire_ip, distance);
> > + next->acquire_ip, distance, trace);
> >
> > if (!ret)
> > return 0;
> >
> > ret = add_lock_to_list(hlock_class(next), hlock_class(prev),
> > &hlock_class(next)->locks_before,
> > - next->acquire_ip, distance);
> > + next->acquire_ip, distance, trace);
> > if (!ret)
> > return 0;
> >
> > @@ -1732,6 +1731,7 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
> > {
> > int depth = curr->lockdep_depth;
> > struct held_lock *hlock;
> > + struct stack_trace trace;
> >
> > /*
> > * Debugging checks.
> > @@ -1748,6 +1748,9 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
> > curr->held_locks[depth-1].irq_context)
> > goto out_bug;
> >
> > + if (!save_trace(&trace))
> > + return 0;
> > +
> > for (;;) {
> > int distance = curr->lockdep_depth - depth + 1;
> > hlock = curr->held_locks + depth-1;
> > @@ -1756,7 +1759,8 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
> > * added:
> > */
> > if (hlock->read != 2) {
> > - if (!check_prev_add(curr, hlock, next, distance))
> > + if (!check_prev_add(curr, hlock, next,
> > + distance, &trace))
> > return 0;
> > /*
> > * Stop after the first non-trylock entry,