Date:	Sat, 31 Jan 2009 20:05:22 -0500
From:	Eric Paris <eparis@...hat.com>
To:	Yinghai Lu <yinghai@...nel.org>
Cc:	linux-kernel@...r.kernel.org, tglx@...utronix.de, mingo@...hat.com,
	hpa@...or.com
Subject: Re: irq_desc_lock_class is held when it is freed

On Sat, 2009-01-31 at 14:50 -0800, Yinghai Lu wrote:
> Eric Paris wrote:
> > I have an hp dl785g5 which is unable to successfully run
> > 2.6.29-0.66.rc3.fc11.x86_64 or 2.6.29-rc2-next-20090126.  During bootup
> > (early on, while userspace daemons are starting) I get the BUG below,
> > which quickly renders the machine dead.  I assume it is because
> > sparse_irq_lock never gets released when the BUG kills that task.  Maybe
> > it dies for some other reason, but either way the machine is dead; gotta
> > hit the virtual power button.
> > 
> > Setting CONFIG_NUMA_MIGRATE_IRQ_DESC=n allowed me to boot the
> > linux-next kernel successfully.  2.6.28 booted and worked fine (although
> > it doesn't have this config option, so that isn't surprising).
> > 
> > Attached you will find the linux-next config and dmesg output from the
> > machine in question when booted using linux-next WITHOUT
> > MIGRATE_IRQ_DESC.  The only difference between the working and broken
> > configs is whether CONFIG_NUMA_MIGRATE_IRQ_DESC is set or not.
> > 
> > If there is anything I can collect, test, or do, please don't hesitate
> > to let me know.
> > 
> It seems there is some merge problem... and there could also be another problem...
> 
> Please check:
> 
> diff --git a/arch/x86/kernel/io_apic.c b/arch/x86/kernel/io_apic.c
> index 6c02c25..610ce0a 100644
> --- a/arch/x86/kernel/io_apic.c
> +++ b/arch/x86/kernel/io_apic.c
> @@ -2511,14 +2511,15 @@ static void irq_complete_move(struct irq_desc **descp)
>  
>  	vector = ~get_irq_regs()->orig_ax;
>  	me = smp_processor_id();
> +
> +	if (vector == cfg->vector && cpumask_test_cpu(me, cfg->domain)) {
>  #ifdef CONFIG_NUMA_MIGRATE_IRQ_DESC
>  		*descp = desc = move_irq_desc(desc, me);
>  		/* get the new one */
>  		cfg = desc->chip_data;
>  #endif
> -
> -	if (vector == cfg->vector && cpumask_test_cpu(me, cfg->domain))
>  		send_cleanup_vector(cfg);
> +	}
>  }
>  #else
>  static inline void irq_complete_move(struct irq_desc **descp) {}

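With that hunk applied, the tail of irq_complete_move() here ends up
reading roughly as below (just a sketch reassembled from the unchanged
context lines in the diff above; the earlier part of the function is
untouched).  The descriptor migration and send_cleanup_vector() now only
run once the irq has actually arrived on its new vector on a CPU in
cfg->domain:

	vector = ~get_irq_regs()->orig_ax;
	me = smp_processor_id();

	if (vector == cfg->vector && cpumask_test_cpu(me, cfg->domain)) {
#ifdef CONFIG_NUMA_MIGRATE_IRQ_DESC
		/* move the descriptor toward this CPU and switch to the new copy */
		*descp = desc = move_irq_desc(desc, me);
		/* get the new one */
		cfg = desc->chip_data;
#endif
		send_cleanup_vector(cfg);
	}
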
Looks like I got the same thing.....

Starting Avahi daemon... [  OK  ]

=========================
[ BUG: held lock freed! ]
-------------------------
swapper/0 is freeing memory ffff88042c2196d0-ffff88042c2198cf, with a lock still held there!
 (&irq_desc_lock_class){++..}, at: [<ffffffff81093218>] handle_fasteoi_irq+0xa9/0xd4
2 locks held by swapper/0:
 #0:  (&irq_desc_lock_class){++..}, at: [<ffffffff81093218>] handle_fasteoi_irq+0xa9/0xd4
 #1:  (sparse_irq_lock){+...}, at: [<ffffffff810940a1>] move_irq_desc+0x69/0x1dc

stack backtrace:
Pid: 0, comm: swapper Not tainted 2.6.29-rc2-next-20090126 #2
Call Trace:
 <IRQ>  [<ffffffff8106b973>] debug_check_no_locks_freed+0x10d/0x14e
 [<ffffffff810d2f87>] kfree+0x99/0x117
 [<ffffffff810941e7>] ? move_irq_desc+0x1af/0x1dc
 [<ffffffff810941e7>] move_irq_desc+0x1af/0x1dc
 [<ffffffff810241dc>] irq_complete_move+0x7c/0x8c
 [<ffffffff8102490c>] ack_apic_level+0x22/0x74
 [<ffffffff81093218>] ? handle_fasteoi_irq+0xa9/0xd4
 [<ffffffff81093229>] handle_fasteoi_irq+0xba/0xd4
 [<ffffffff81013b15>] do_IRQ+0xd5/0x140
 [<ffffffff81011e13>] ret_from_intr+0x0/0x2e
 <EOI> <6>bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex



