Message-Id: <1157884197.17849.125.camel@laptopd505.fenrus.org>
Date: Sun, 10 Sep 2006 12:29:57 +0200
From: Arjan van de Ven <arjan@...radead.org>
To: Andi Kleen <ak@...e.de>
Cc: Laurent Riffard <laurent.riffard@...e.fr>, mingo@...e.hu,
Andrew Morton <akpm@...l.org>,
Kernel development list <linux-kernel@...r.kernel.org>,
Jeremy Fitzhardinge <jeremy@...source.com>
Subject: Re: 2.6.18-rc6-mm1: GPF loop on early boot
On Sun, 2006-09-10 at 10:32 +0200, Andi Kleen wrote:
> On Sunday 10 September 2006 11:35, Laurent Riffard wrote:
> > On 08.09.2006 at 10:13, Andrew Morton wrote:
> > > ftp://ftp.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.18-rc6/2.6.18-rc6-mm1/
> >
> > Hello,
> >
> > This kernel won't boot here: it enters a GPF loop during
> > early boot. I attached a screenshot of the first GPF
> > (pause_on_oops=120 helped).
>
>
> It's lockdep's fault. This patch should fix it:
>
> In general, from my experience, lockdep seems to be a dependency
> nightmare: it uses far too much infrastructure far too early. Should
> we always disable lockdep very early (before interrupts are turned
> on) instead? (Early boot is entirely single-threaded, so it can never
> have lock ordering problems.)
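
Concretely, the kind of very-early gate being suggested might look
something like the sketch below -- the flag name and the place it gets
flipped are illustrative guesses, not taken from any actual patch:

	/* Hypothetical flag: set once start_kernel() has initialized
	 * enough state for lockdep to run safely. */
	static int lockdep_boot_done;

	void lockdep_boot_init_done(void)
	{
		lockdep_boot_done = 1;
	}

	/* Every lockdep entry point would then bail out first;
	 * lock_acquire() shown as an example (2.6.18-era signature): */
	void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
			  int trylock, int read, int check, unsigned long ip)
	{
		/* Early boot is single-threaded with interrupts off,
		 * so there is no lock ordering to check yet. */
		if (!lockdep_boot_done)
			return;
		/* ... normal acquire/chain tracking ... */
	}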
lockdep starts somewhere in the middle; I doubt it's the only thing that
assumes that current is valid at that point.
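
For reference: on x86-64, 'current' is fetched from the per-CPU PDA
through %gs, which is not set up at the very start of boot, so anything
dereferencing current that early will fault. An early_current()-style
fallback could look something like this sketch (the readiness flag is
hypothetical):

	/* Hypothetical: whether the per-CPU PDA (the %gs base on
	 * x86-64) has been set up yet. */
	extern int boot_pda_initialized;

	static inline struct task_struct *early_current(void)
	{
		/* Before the PDA exists, reading 'current' GPFs;
		 * fall back to the boot task instead. */
		if (!boot_pda_initialized)
			return &init_task;
		return current;
	}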
> /*
> - * Remove the lock to the list of currently held locks in a
> + * Remove the lock to the list of early_current()ly held locks in a
> * potentially non-nested (out of order) manner. This is a
> * relatively rare operation, as all the unlock APIs default
> * to nested mode (which uses lock_release()):
> @@ -2227,7 +2231,7 @@ lock_release_non_nested(struct task_stru
> int i;
>
> /*
> - * Check whether the lock exists in the current stack
> + * Check whether the lock exists in the early_current() stack
> * of held locks:
> */
?? -- "early_current()ly held locks" looks like a stray mechanical
current -> early_current() replacement that leaked into the comments.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/