Message-Id: <201005200945.24901.dmitry.torokhov@gmail.com>
Date: Thu, 20 May 2010 09:45:24 -0700
From: Dmitry Torokhov <dmitry.torokhov@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Valdis.Kletnieks@...edu, linux-kernel@...r.kernel.org
Subject: Re: mmotm 2010-05-19 BUG weirdness...
On Thursday 20 May 2010 05:55:20 am Andrew Morton wrote:
> On Thu, 20 May 2010 11:47:50 -0400 Valdis.Kletnieks@...edu wrote:
> > On Wed, 19 May 2010 16:13:09 PDT, akpm@...ux-foundation.org said:
> > > The mm-of-the-moment snapshot 2010-05-19-16-12 has been uploaded to
> > >
> > > http://userweb.kernel.org/~akpm/mmotm/
> >
> > So I'm looking closer at the BUG I just posted
>
> I can't see that BUG report on lkml or in inbox.
>
> > - I had deleted two further
> > BUGs because they were obviously follow-ons to the original. But then...
> >
> > Note the following 2 lines:
> >
> > [ 35.357018] note: keymap[2481] exited with preempt_count 1
> > [ 35.360503] BUG: scheduling while atomic: keymap/2481/0x10000002
> >
> > The kernel reports that the instigating process exited - and then reports
> > it as the offender for a "scheduling while atomic". Is this insufficient
> > cleanup after the first BUG? Do we care because it's a sign of a
> > scheduler bug that could trip on a non-BUG path as well, or is it "all
> > bets are off" because of the first BUG?
>
> Yes, the oops code will end up calling do_exit() to get rid of this
> process and to try to keep the machine limping along. So if you hit an
> oops with (say) a spinlock held, the task ends up calling do_exit()
> with a non-zero preempt_count.
>
> So the only problem I'm seeing here is .... Dmitry's ;)
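Right - to make that sequence concrete, here is a rough sketch (a
hypothetical driver, not the actual path keymap hit):

	#include <linux/spinlock.h>
	#include <linux/bug.h>

	struct example_dev {
		spinlock_t lock;
		void *buf;
	};

	static void example_handler(struct example_dev *ex)
	{
		spin_lock(&ex->lock);	/* preempt_count: 0 -> 1 */
		BUG_ON(!ex->buf);	/* oops fires with the lock held */
		spin_unlock(&ex->lock);	/* never reached on the oops path */
	}

The oops handler calls do_exit() for the dying task without unwinding
ex->lock, so the task schedules away with preempt_count still at 1 -
hence the "scheduling while atomic" complaint against a process that
was already reported as exited.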
Hmm, any chance you could stick a printk in input_set_keycode and print
the id/name of the input device?
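Something along these lines at the top of input_set_keycode() should do
(illustrative only - assumes the function in this mmotm tree still takes
a struct input_dev pointer as its first argument):

	/* debug: report which device the keymap change is hitting */
	pr_info("input_set_keycode: dev %s (%s)\n",
		dev_name(&dev->dev), dev->name);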
--
Dmitry