Message-Id: <201003082333.49460.david@sicnarf.com>
Date:	Mon, 8 Mar 2010 23:33:49 +0100
From:	David Steiner <david@...narf.com>
To:	linux-kernel@...r.kernel.org
Cc:	Neil Brown <neilb@...e.de>
Subject: Bug report: scheduling while atomic: md0_raid5 - 2.6.33-rt4

Hi, I'm writing to report a problem occurring in the raid456 module. 
I'm running a vanilla kernel with the real-time patches applied 
(http://rt.wiki.kernel.org). 

Since I upgraded from 2.6.31.12-rt12, the following message has been appearing 
frequently; it did not occur before the upgrade:

[193002.046563] BUG: scheduling while atomic: md0_raid5/0x00000001/809, CPU#1
[193002.046571] Modules linked in: i915 drm_kms_helper drm i2c_algo_bit video 
output ppdev lp parport xt_multiport iptable_filter ip_tables x_tables 
kvm_intel kvm fuse loop raw1394 arc4 ecb ath5k mac80211 ath cfg80211 psmouse 
usbhid rfkill hid i2c_i801 serio_raw evdev rng_core led_class i2c_core raid456 
md_mod async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx 
ide_gd_mod ata_generic ata_piix ide_pci_generic ahci libata ohci1394 piix 
uhci_hcd pdc202xx_new scsi_mod ieee1394 e1000e ehci_hcd ide_core intel_agp 
[last unloaded: scsi_wait_scan]
[193002.046664] Pid: 809, comm: md0_raid5 Not tainted 2.6.33-rt4 #1
[193002.046668] Call Trace:
[193002.046682]  [<ffffffff812fc212>] ? __schedule+0x83/0x7dd
[193002.046692]  [<ffffffff810680d3>] ? task_blocks_on_rt_mutex+0x14b/0x19f
[193002.046700]  [<ffffffff812fca5b>] ? schedule+0x10/0x22
[193002.046707]  [<ffffffff812fd582>] ? rt_spin_lock_slowlock+0x14b/0x234
[193002.046721]  [<ffffffffa01a0fcc>] ? release_stripe+0x1a/0x31 [raid456]
[193002.046730]  [<ffffffffa0156402>] ? async_xor+0x402/0x413 [async_xor]
[193002.046741]  [<ffffffffa0133991>] ? ide_do_rw_disk+0x217/0x299 [ide_gd_mod]
[193002.046751]  [<ffffffffa01a2fbd>] ? __raid_run_ops+0x961/0xb61 [raid456]
[193002.046761]  [<ffffffffa01a118d>] ? ops_complete_reconstruct+0x0/0x91 [raid456]
[193002.046772]  [<ffffffffa01a56df>] ? handle_stripe+0x17aa/0x1815 [raid456]
[193002.046782]  [<ffffffffa01a5ae3>] ? raid5d+0x399/0x3da [raid456]
[193002.046796]  [<ffffffffa0187c50>] ? md_thread+0xf2/0x110 [md_mod]
[193002.046803]  [<ffffffff81058c12>] ? autoremove_wake_function+0x0/0x2a
[193002.046815]  [<ffffffffa0187b5e>] ? md_thread+0x0/0x110 [md_mod]
[193002.046823]  [<ffffffff81058880>] ? kthread+0x75/0x7d
[193002.046831]  [<ffffffff81036068>] ? finish_task_switch+0x49/0xc8
[193002.046839]  [<ffffffff81009824>] ? kernel_thread_helper+0x4/0x10
[193002.046847]  [<ffffffff8105880b>] ? kthread+0x0/0x7d
[193002.046853]  [<ffffffff81009820>] ? kernel_thread_helper+0x0/0x10

$ dmesg | grep BUG
[182968.276011] BUG: scheduling while atomic: md0_raid5/0x00000001/809, CPU#1
[.... 55 omitted .....]
[193002.046563] BUG: scheduling while atomic: md0_raid5/0x00000001/809, CPU#1

$ cat /proc/version 
Linux version 2.6.33-rt4 (root@...dbox) (gcc version 4.4.3 (Debian 4.4.3-3) ) 
#1 SMP PREEMPT RT Sat Feb 27 22:35:34 CET 2010

According to Neil Brown, this problem was introduced by the real-time patches.

This is a root-on-RAID5 setup with 3 IDE-attached disks. If you need any more 
info, let me know. Please CC me on any replies, as I'm not subscribed. 
Greetings,
David

Attachments:
- proc_cpuinfo (text/plain, 1458 bytes)
- proc_iomem (text/plain, 1576 bytes)
- proc_ioports (text/plain, 1631 bytes)
- proc_modules (text/plain, 2952 bytes)
- lspci (text/plain, 16492 bytes)
- ver_linux (text/plain, 1114 bytes)
