Message-Id: <20080406234814.a40025fb.akpm@linux-foundation.org>
Date:	Sun, 6 Apr 2008 23:48:14 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Valdis.Kletnieks@...edu
Cc:	mingo@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: 2.6.25-rc8-mm1 - BUG: scheduling while atomic:
 swapper/0/0xffffffff

On Mon, 07 Apr 2008 02:21:22 -0400 Valdis.Kletnieks@...edu wrote:

> On Tue, 01 Apr 2008 21:32:14 PDT, Andrew Morton said:
> > ftp://ftp.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.25-rc8/2.6.25-rc8-mm1/
> 
> I've been seeing these crop up once in a while - it can take hours after a
> reboot before I see the first one, but once I see one, I'm likely to see
> more, with anywhere from ~5 seconds to ~10 minutes between BUG messages.
> 
> BUG: scheduling while atomic: swapper/0/0xffffffff
> Pid: 0, comm: swapper Tainted: P          2.6.25-rc8-mm1 #4
> 
> Call Trace:
>  [<ffffffff8020b2f4>] ? default_idle+0x0/0x74
>  [<ffffffff8022be19>] __schedule_bug+0x5d/0x61
>  [<ffffffff80552aea>] schedule+0x11a/0x9e4
>  [<ffffffff805536ce>] ? preempt_schedule+0x3c/0xaa
>  [<ffffffff802480f1>] ? hrtimer_forward+0x82/0x96
>  [<ffffffff804600a4>] ? cpuidle_idle_call+0x0/0xd5
>  [<ffffffff8020b2f4>] ? default_idle+0x0/0x74
>  [<ffffffff8020b2e0>] cpu_idle+0xf6/0x10a
>  [<ffffffff80540cb2>] rest_init+0x86/0x8a
> 
> Eventually, I end up with a basically hung system, and need to alt-sysrq-B.
> 
> Yes, I know it's tainted, and it's possible the root cause is a
> self-inflicted buggy module - but the backtrace above seems odd.  Did some
> of my code manage to idle the CPU while in_atomic() was still true, or is
> the path from cpu_idle on down doing something it shouldn't be?

I'd say that there's an unlock missing somewhere.
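
Purely as an illustration of what I mean (every name below is made up,
nothing here is from your trace): with CONFIG_PREEMPT, spin_lock() bumps
preempt_count and spin_unlock() drops it again, so an error path that
returns with the lock still held leaves the count stuck high, and the
damage only shows up later, in whichever context calls schedule() next -
here the idle task.

static DEFINE_SPINLOCK(demo_lock);              /* made-up lock */

static int demo_update(struct demo_dev *dev)    /* made-up struct */
{
        spin_lock(&demo_lock);          /* preempt_count++ */

        if (dev->failed)
                return -EIO;            /* bug: returns with demo_lock held,
                                         * the count never comes back down */

        dev->counter++;
        spin_unlock(&demo_lock);        /* preempt_count-- on the good path */
        return 0;
}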

> (I admit being confused - if my code was the source of the in_atomic()
> error, shouldn't it have been caught on the *previous* call to schedule -
> the one that ran through all the queues and decided we should invoke idle?)

Sounds sane.  Perhaps preempt_count is getting mucked up in interrupt
context?
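
To spell that out: the warning only samples preempt_count at the moment
schedule() runs.  The check looks roughly like this (paraphrased from
kernel/sched.c from memory, so the exact -mm code may differ a bit):

static inline void schedule_debug(struct task_struct *prev)
{
        /* whine if the count still says "atomic" right now */
        if (unlikely(in_atomic_preempt_off()) && unlikely(!prev->exit_state))
                __schedule_bug(prev);   /* "scheduling while atomic" */

        /* ... stats and profiling omitted ... */
}

So if an interrupt taken on the idle CPU leaves the count unbalanced, the
earlier schedule() that picked the idle task saw a clean count, and only
the idle loop's next trip into schedule() hits the check - which would
also explain why swapper/0 is the one getting blamed.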

IIRC there's a tool in either the recently-added tracing code or still in
the -rt tree that would help find a missed unlock, but I forget what it
was.  Ingo will know...
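
For reference (and it may well not be the thing I'm half-remembering
above), CONFIG_DEBUG_PREEMPT already warns at the point where
preempt_count would underflow, which catches the opposite slip - an extra
unlock/preempt_enable() - right where it happens rather than at the
eventual scheduling-while-atomic splat.  A made-up example of the kind of
thing it flags:

static void demo_finish(struct demo_dev *dev)   /* made-up helper */
{
        /* ... */
        spin_unlock(&dev->lock);        /* callee drops the lock ...   */
}

static void demo_irq_done(struct demo_dev *dev)
{
        spin_lock(&dev->lock);
        demo_finish(dev);
        spin_unlock(&dev->lock);        /* ... and the caller drops it again:
                                         * preempt_count goes negative and
                                         * DEBUG_PREEMPT warns right here */
}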


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
