Message-ID: <20080728235133.GA7956@elte.hu>
Date: Tue, 29 Jul 2008 01:51:33 +0200
From: Ingo Molnar <mingo@...e.hu>
To: David Miller <davem@...emloft.net>
Cc: peterz@...radead.org, linux-kernel@...r.kernel.org
Subject: Re: combinatorial explosion in lockdep
* David Miller <davem@...emloft.net> wrote:
> From: David Miller <davem@...emloft.net>
> Date: Mon, 28 Jul 2008 15:37:02 -0700 (PDT)
>
> > I'm still digging on what exactly makes this happen, but I wanted to
> > get the information out as soon as I had something useful like this.
>
> As a simple hack, I tried mitigating these effects using a
> generation-count based "visited" scheme to bypass traversing
> lock_class chains we've already walked down.
>
> It seems to work.
nice!
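btw., just to make sure i read the approach right, here's how i'd picture
such a generation-count "visited" scheme on a toy dependency graph - the
names and layout below are purely illustrative, not lockdep's real data
structures and not your patch:

/* illustrative sketch only - names and layout are made up, not lockdep's */
#include <stdio.h>

#define MAX_DEPS 4

struct lock_class {
	const char *name;
	struct lock_class *deps[MAX_DEPS];	/* classes this one depends on */
	int nr_deps;
	unsigned long visited_gen;		/* generation when last visited */
};

static unsigned long walk_generation;		/* bumped once per full walk */

/* count reachable classes, skipping anything already seen in this walk */
static int walk_deps(struct lock_class *class)
{
	int i, steps = 1;

	if (class->visited_gen == walk_generation)
		return 0;			/* already walked this subgraph */
	class->visited_gen = walk_generation;

	for (i = 0; i < class->nr_deps; i++)
		steps += walk_deps(class->deps[i]);

	return steps;
}

static int check_graph(struct lock_class *root)
{
	walk_generation++;			/* invalidates all prior marks at once */
	return walk_deps(root);
}

int main(void)
{
	/* tiny diamond: A -> B, A -> C, B -> D, C -> D */
	struct lock_class D = { "D" }, C = { "C" }, B = { "B" }, A = { "A" };

	B.deps[B.nr_deps++] = &D;
	C.deps[C.nr_deps++] = &D;
	A.deps[A.nr_deps++] = &B;
	A.deps[A.nr_deps++] = &C;

	/* without the generation check, D would be visited twice */
	printf("classes walked: %d\n", check_graph(&A));
	return 0;
}

the nice property is that a single increment of the generation counter
invalidates all the old marks, so there's no O(n) clearing pass needed
between walks.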
Any chance to get the "cat /proc/lockdep*" output, so that we could see
and check the expected behavior of the full graph?
But with 128 cpus and the double lock taking in the scheduler, there are
around 127*128/2 possible combinations of cpu rq lock ordering - that's
thousands of chains - and it takes a quadratic number of steps to iterate
over them via the current algorithm, which would be in the neighborhood
of 127^4 / 4, or about 65 million steps for each new change to the
graph. (but it's late and i'm not sure at all about the numbers so i
could be off by a few orders of magnitude, in either direction)
Plus the irqsafe/softirqsafe layers multiply things too.
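Spelling out my back-of-the-envelope numbers, so they are easy to
double-check:

  rq lock pairs:      128 * 127 / 2  =   8128 chains
  quadratic re-walk:  8128 * 8128   ~=   66 million steps
  which is roughly:   127^4 / 4     ~=   65 million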
Could you please also send the non-stack-iterator changes you did -
that's cool stuff too!
Ingo