Date:	Tue, 03 Jun 2008 16:55:26 +0530
From:	Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>
To:	Stephen Rothwell <sfr@...b.auug.org.au>
CC:	linux-next@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
	Dhaval Giani <dhaval@...ux.vnet.ibm.com>, peterz@...radead.org,
	Ingo Molnar <mingo@...e.hu>, Andy Whitcroft <apw@...dowen.org>
Subject: [BUG] linux-next: Tree for June 2/3- oops at find_busiest_group()

Hi,

While booting the next-20080602/20080603 kernels on an x86_64 machine, the
kernel panics with

BUG: unable to handle kernel NULL pointer dereference at 0000000000000028
IP: [<ffffffff80227fdb>] find_busiest_group+0x4ef/0x6b8
PGD 1e44a6067 PUD 1e45e9067 PMD 0 
Oops: 0000 [1] SMP 
last sysfs file: /sys/block/ram15/dev
CPU 3 
Modules linked in: aic79xx(+) scsi_transport_spi sd_mod scsi_mod ext3 jbd ehci_hcd ohci_hcd uhci_hcd
Pid: 0, comm: swapper Not tainted 2.6.26-rc4-next-20080602-autotest #1
RIP: 0010:[<ffffffff80227fdb>]  [<ffffffff80227fdb>] find_busiest_group+0x4ef/0x6b8
RSP: 0018:ffff8101e7187d50  EFLAGS: 00010206
RAX: 0000000000206400 RBX: 0000000000000000 RCX: 0000000000000818
RDX: 0000000000000818 RSI: 0000000000000818 RDI: 00000000000000c0
RBP: ffff8101e7187e60 R08: 000000000000003f R09: ffff81000104de00
R10: ffff8101e7187ec0 R11: 0000000000000018 R12: 0000000000000001
R13: 0000000000000002 R14: ffff810001056de0 R15: 0000000000001031
FS:  0000000000000000(0000) GS:ffff8101e70e04c0(0000) knlGS:0000000000000000
CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 0000000000000028 CR3: 00000001e45eb000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process swapper (pid: 0, threadinfo ffff8101e7180000, task ffff8101e717f610)
Stack:  ffff8101e7187ec0 ffff8101e7187ef4 00000000e7187d80 ffff8101e7187ee8
 0000000300000001 ffff810001068cc0 ffff81000104dde8 ffff81000104dde0
 0000000000000000 000000000040c400 000000000040c400 0000000000000000
Call Trace:
 <IRQ>  [<ffffffff8022b92e>] run_rebalance_domains+0x1a4/0x4e4
 [<ffffffff8021180b>] read_tsc+0x9/0x1c
 [<ffffffff80237ce4>] __do_softirq+0x5e/0xcd
 [<ffffffff8020cefc>] call_softirq+0x1c/0x28
 [<ffffffff8020e570>] do_softirq+0x2c/0x68
 [<ffffffff8021b2c8>] smp_apic_timer_interrupt+0x90/0xa8
 [<ffffffff80211e44>] mwait_idle+0x0/0x44
 [<ffffffff8020c9b6>] apic_timer_interrupt+0x66/0x70
 <EOI>  [<ffffffff80211e85>] mwait_idle+0x41/0x44
 [<ffffffff8020ab79>] cpu_idle+0x6d/0x8b


Code: 48 8b 9d 28 ff ff ff 4c 89 f8 4c 89 fa 48 29 ca 48 29 f0 48 39 d0 48 0f 47 c2 8b 53 28 48 8b 9d 30 ff ff ff 48 0f af c2 48 89 ca <8b> 4b 28 48 2b 95 48 ff ff ff 48 0f af d1 48 39 d0 48 0f 47 c2 
RIP  [<ffffffff80227fdb>] find_busiest_group+0x4ef/0x6b8
 RSP <ffff8101e7187d50>
CR2: 0000000000000028
---[ end trace 4e92db360de5f7b4 ]---
Kernel panic - not syncing: Aiee, killing interrupt handler!
Pid: 0, comm: swapper Tainted: G      D   2.6.26-rc4-next-20080602-autotest #1

Call Trace:
 <IRQ>  [<ffffffff80233470>] panic+0x86/0x144
 [<ffffffff80233fa9>] printk+0x4e/0x56
 [<ffffffff802361ac>] do_exit+0x71/0x67a
 [<ffffffff804791b1>] oops_begin+0x0/0x8c
 [<ffffffff8047b0cf>] do_page_fault+0x775/0x82e
 [<ffffffff802271a0>] enqueue_task+0x50/0x5b
 [<ffffffff80478df9>] error_exit+0x0/0x51
 [<ffffffff80227fdb>] find_busiest_group+0x4ef/0x6b8
 [<ffffffff8022b92e>] run_rebalance_domains+0x1a4/0x4e4
 [<ffffffff8021180b>] read_tsc+0x9/0x1c
 [<ffffffff80237ce4>] __do_softirq+0x5e/0xcd
 [<ffffffff8020cefc>] call_softirq+0x1c/0x28
 [<ffffffff8020e570>] do_softirq+0x2c/0x68
 [<ffffffff8021b2c8>] smp_apic_timer_interrupt+0x90/0xa8
 [<ffffffff80211e44>] mwait_idle+0x0/0x44
 [<ffffffff8020c9b6>] apic_timer_interrupt+0x66/0x70
 <EOI>  [<ffffffff80211e85>] mwait_idle+0x41/0x44
 [<ffffffff8020ab79>] cpu_idle+0x6d/0x8b

0xffffffff80225b77 is in find_busiest_group (kernel/sched.c:3124).
3119                            100*max_load <= sd->imbalance_pct*this_load)
3120                    goto out_balanced;
3121
3122            busiest_load_per_task /= busiest_nr_running;
3123            if (group_imb)
3124                    busiest_load_per_task = min(busiest_load_per_task, avg_load);
3125
3126            /*
3127             * We're trying to get all the cpus to the average_load, so we don't
3128             * want to push ourselves above the average load, nor do we wish to


The same oops is hit again with the next-20080603 kernel, while inserting the
ext3.ko module:

BUG: unable to handle kernel NULL pointer dereference at 0000000000000028
IP: [<ffffffff8022805b>] find_busiest_group+0x4ef/0x6b8
PGD 1e45e6067 PUD 1e45e5067 PMD 0 
Oops: 0000 [1] SMP 
last sysfs file: /sys/block/ram15/dev
CPU 3 
Modules linked in: jbd ehci_hcd ohci_hcd uhci_hcd
Pid: 520, comm: insmod Not tainted 2.6.26-rc4-next-20080603-autotest #1
RIP: 0010:[<ffffffff8022805b>]  [<ffffffff8022805b>] find_busiest_group+0x4ef/0x6b8
RSP: 0018:ffff8101e4673988  EFLAGS: 00010006
RAX: 00000000000c3400 RBX: 0000000000000000 RCX: 000000000000030c
RDX: 000000000000030c RSI: 000000000000030c RDI: 00000000000000c0
RBP: ffff8101e4673a98 R08: 000000000000003f R09: ffff81000104de20
R10: ffff8101e4673ae8 R11: 0000000000000018 R12: 0000000000000001
R13: 0000000000000001 R14: ffff810001056e00 R15: 0000000000000619
FS:  0000000000680850(0063) GS:ffff8101e70e04c0(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000028 CR3: 00000001e6867000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process insmod (pid: 520, threadinfo ffff8101e4672000, task ffff8101e729a8f0)
Stack:  ffff8101e4673ae8 ffff8101e4673b14 0000000200000002 ffff8101e4673b08
 0000000300000000 ffff810001068ce0 ffff81000104de08 ffff81000104de00
 0000000000000000 0000000000186400 0000000000186400 0000000000000000
Call Trace:
 [<ffffffff804775d2>] schedule+0x275/0x756
 [<ffffffff80477f1e>] schedule_timeout+0x1e/0xad
 [<ffffffff80227220>] enqueue_task+0x50/0x5b
 [<ffffffff80477cba>] wait_for_common+0xd5/0x118
 [<ffffffff802294c1>] default_wake_function+0x0/0xe
 [<ffffffff802444db>] __kthread_create+0x91/0xf6
 [<ffffffff802561d8>] stop_cpu+0x0/0x84
 [<ffffffff802f76f3>] avc_has_perm+0x49/0x5b
 [<ffffffff8024dd16>] rt_mutex_adjust_pi+0x18/0x5b
 [<ffffffff8022eb28>] sched_setscheduler+0x2e9/0x30d
 [<ffffffff802560b4>] __stop_machine_run+0xf1/0x1e7
 [<ffffffff80255fc0>] chill+0x0/0x3
 [<ffffffff8024f4f5>] __link_module+0x0/0x18
 [<ffffffff8021e4fc>] module_finalize+0x103/0x121
 [<ffffffff804781fe>] mutex_lock+0xd/0x1e
 [<ffffffff8024f4f5>] __link_module+0x0/0x18
 [<ffffffff802561c9>] stop_machine_run_notype+0x1f/0x2e
 [<ffffffff80250e30>] sys_init_module+0x1502/0x1a67
 [<ffffffff802c60b5>] mb_cache_entry_find_next+0x0/0xae
 [<ffffffff8020bd7b>] system_call_after_swapgs+0x7b/0x80


Code: 48 8b 9d 28 ff ff ff 4c 89 f8 4c 89 fa 48 29 ca 48 29 f0 48 39 d0 48 0f 47 c2 8b 53 28 48 8b 9d 30 ff ff ff 48 0f af c2 48 89 ca <8b> 4b 28 48 2b 95 48 ff ff ff 48 0f af d1 48 39 d0 48 0f 47 c2 
RIP  [<ffffffff8022805b>] find_busiest_group+0x4ef/0x6b8
 RSP <ffff8101e4673988>
CR2: 0000000000000028
---[ end trace 023424d038ec337b ]---   

0xffffffff8022805b is in find_busiest_group (kernel/sched.c:3151).
3146                    *imbalance = 0;
3147                    goto small_imbalance;
3148            }
3149
3150            /* Don't want to pull so many tasks that a group would go idle */
3151            max_pull = min(max_load - avg_load, max_load - busiest_load_per_task);
3152
3153            /* How much load to actually move to equalise the imbalance */
3154            *imbalance = min(max_pull * busiest->__cpu_power,
3155                                    (avg_load - this_load) * this->__cpu_power)

-- 
Thanks & Regards,
Kamalesh Babulal,
Linux Technology Center,
IBM, ISTL.

View attachment "config-next20080603" of type "text/plain" (72001 bytes)
