Date:	Sat, 31 Jan 2009 17:19:59 -0800
From:	Yinghai Lu <yinghai@...nel.org>
To:	Eric Paris <eparis@...hat.com>, mingo@...hat.com
CC:	linux-kernel@...r.kernel.org, tglx@...utronix.de, hpa@...or.com
Subject: Re: irq_desc_lock_class is held when it is freed

Eric Paris wrote:
> On Sat, 2009-01-31 at 14:50 -0800, Yinghai Lu wrote:
>> Eric Paris wrote:
>>> I have an hp dl785g5 which is unable to successfully run
>>> 2.6.29-0.66.rc3.fc11.x86_64 or 2.6.29-rc2-next-20090126.  During bootup
>>> (early on, while userspace daemons are starting) I get the BUG below,
>>> which quickly renders the machine dead.  I assume that is because
>>> sparse_irq_lock never gets released when the BUG kills that task.  Or
>>> maybe it dies for some other reason; either way the machine is dead and
>>> I have to hit the virtual power button.
>>>
>>> Setting CONFIG_NUMA_MIGRATE_IRQ_DESC=n allowed me to successfully boot
>>> the linux-next kernel.  2.6.28 booted and worked fine (although it
>>> doesn't have this config option, so that isn't surprising).
>>>
>>> Attached you will find the linux-next config and the dmesg output from
>>> the machine in question when booted using linux-next WITHOUT
>>> CONFIG_NUMA_MIGRATE_IRQ_DESC.  The only difference between the working
>>> and broken configs is whether CONFIG_NUMA_MIGRATE_IRQ_DESC is set.
>>>
>>> If there is anything I can collect, test, or do, please don't hesitate
>>> to let me know.
>>>
>> it seems there is a merge problem... and there could also be another problem...
>>
>> please check
>>
>> diff --git a/arch/x86/kernel/io_apic.c b/arch/x86/kernel/io_apic.c
>> index 6c02c25..610ce0a 100644
>> --- a/arch/x86/kernel/io_apic.c
>> +++ b/arch/x86/kernel/io_apic.c
>> @@ -2511,14 +2511,15 @@ static void irq_complete_move(struct irq_desc **descp)
>>  
>>  	vector = ~get_irq_regs()->orig_ax;
>>  	me = smp_processor_id();
>> +
>> +	if (vector == cfg->vector && cpumask_test_cpu(me, cfg->domain)) {
>>  #ifdef CONFIG_NUMA_MIGRATE_IRQ_DESC
>>  		*descp = desc = move_irq_desc(desc, me);
>>  		/* get the new one */
>>  		cfg = desc->chip_data;
>>  #endif
>> -
>> -	if (vector == cfg->vector && cpumask_test_cpu(me, cfg->domain))
>>  		send_cleanup_vector(cfg);
>> +	}
>>  }
>>  #else
>>  static inline void irq_complete_move(struct irq_desc **descp) {}
> 
> Looks like I got the same thing.....
> 
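
For reference, here is roughly how the tail of irq_complete_move() reads
once the hunk above is applied.  This is a reconstruction from the diff
context only; the earlier part of the function (which returns early when no
vector move is in progress) is omitted, so treat the fragment as a sketch
rather than the exact 2.6.29 source:

	/*
	 * Tail of irq_complete_move(), post-fix: the descriptor is
	 * migrated and the cleanup IPI sent only when the interrupt
	 * arrived on the new vector on a CPU in the new domain.
	 * Before the fix, move_irq_desc() ran unconditionally, outside
	 * this check.
	 */
	vector = ~get_irq_regs()->orig_ax;
	me = smp_processor_id();

	if (vector == cfg->vector && cpumask_test_cpu(me, cfg->domain)) {
#ifdef CONFIG_NUMA_MIGRATE_IRQ_DESC
		*descp = desc = move_irq_desc(desc, me);
		/* get the new one */
		cfg = desc->chip_data;
#endif
		send_cleanup_vector(cfg);
	}
}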

and here is a patch for the locking problem:

[PATCH] x86: fix lock status with numa_migrate_irq_desc

Impact: make the lock state consistent

Adjust the lock sequence in __real_move_irq_desc(): drop sparse_irq_lock and
old_desc->lock before the old descriptor is freed, and take the new
descriptor's lock before returning, so a held lock is never freed.

Reported-by: Eric Paris <eparis@...hat.com>
Signed-off-by: Yinghai Lu <yinghai@...nel.org>

---
 kernel/irq/numa_migrate.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

Index: linux-2.6/kernel/irq/numa_migrate.c
===================================================================
--- linux-2.6.orig/kernel/irq/numa_migrate.c
+++ linux-2.6/kernel/irq/numa_migrate.c
@@ -78,7 +78,7 @@ static struct irq_desc *__real_move_irq_
 	desc = irq_desc_ptrs[irq];
 
 	if (desc && old_desc != desc)
-			goto out_unlock;
+		goto out_unlock;
 
 	node = cpu_to_node(cpu);
 	desc = kzalloc_node(sizeof(*desc), GFP_ATOMIC, node);
@@ -97,10 +97,14 @@ static struct irq_desc *__real_move_irq_
 	}
 
 	irq_desc_ptrs[irq] = desc;
+	spin_unlock_irqrestore(&sparse_irq_lock, flags);
 
 	/* free the old one */
 	free_one_irq_desc(old_desc, desc);
+	spin_unlock(&old_desc->lock);
 	kfree(old_desc);
+	spin_lock(&desc->lock);
+	return desc;
 
 out_unlock:
 	spin_unlock_irqrestore(&sparse_irq_lock, flags);
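
With the patch applied, the end of __real_move_irq_desc() reads roughly as
below.  This is reconstructed from the diff context; the assumption (from
the Subject and the surrounding code) is that the caller enters with
old_desc->lock held, which is why that lock must be dropped before the
kfree() and why the function now returns with the new desc->lock held
instead.  The trailing return after out_unlock is likewise assumed from the
function's return type:

	irq_desc_ptrs[irq] = desc;
	spin_unlock_irqrestore(&sparse_irq_lock, flags);

	/*
	 * Free the old descriptor: release its lock before kfree() so
	 * we never free memory that still contains a held spinlock
	 * (the "held when it is freed" complaint in the Subject), then
	 * hand the caller the new descriptor with its lock held.
	 */
	free_one_irq_desc(old_desc, desc);
	spin_unlock(&old_desc->lock);
	kfree(old_desc);
	spin_lock(&desc->lock);

	return desc;

out_unlock:
	spin_unlock_irqrestore(&sparse_irq_lock, flags);

	return desc;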
