Date:	Mon, 21 Sep 2009 18:00:22 +0900
From:	Tejun Heo <tj@...nel.org>
To:	Mel Gorman <mel@....ul.ie>
CC:	Sachin Sant <sachinp@...ibm.com>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Nick Piggin <npiggin@...e.de>,
	Christoph Lameter <cl@...ux-foundation.org>,
	heiko.carstens@...ibm.com, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [PATCH 1/3] slqb: Do not use DEFINE_PER_CPU for per-node data

Hello,

Mel Gorman wrote:
>>> Can you please post a full dmesg showing the corruption?
> 
> There isn't a useful dmesg available and my evidence that it's within the
> pcpu allocator is a bit weak.

I'd really like to see the memory layout, especially how far apart the
nodes are.

> Symptoms are a crash within SLQB when a second CPU is brought up,
> due to a bad data access within a declared per-cpu area.  Sometimes
> it looks like the value was NULL, and other times it's a random value.
> 
> The "per-cpu" area in this case is actually a per-node area. This implied that
> it was either racing (but the locking looked sound), a buffer overflow (but
> I couldn't find one) or the per-cpu areas were being written to by something
> else unrelated. I considered it possible that as the CPU and node numbers did
> not match up that the unused numbers were being freed up for use elsewhere. I
> haven't dug into the per-cpu implementation to see if this is a possibility.
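
If I follow, the suspect pattern is something like the following
(a hypothetical sketch, not the actual SLQB code):

#include <linux/percpu.h>

struct node_cache {		/* made-up per-node bookkeeping */
        void *freelist;
};

/* per-cpu storage, but indexed by node id below */
DEFINE_PER_CPU(struct node_cache, per_node_data);

static struct node_cache *get_node_data(int node)
{
        /* 'node' is a node id, while per_cpu() expects a cpu id;
         * the two only happen to coincide on some machines */
        return &per_cpu(per_node_data, node);
}

When node ids are sparse relative to cpu ids, that indexes per-cpu
slots which were never set up for this use.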

I'm now working on ia64 percpu support, and it showed similar memory
corruption while initializing the ipv4 snmp counters.  It turned out
the areas assigned to each cpu ended up too far apart, so the offsets
couldn't be honored in the vmalloc area, which led to percpu allocation
failure.  The ipv4 snmp code doesn't verify the allocation result and
ends up accessing NULL percpu pointers.  On ia64, that means accessing
the area right before cpu0's percpu area, causing various interesting
memory corruptions.
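
In other words, the failure mode looks roughly like this (a sketch with
made-up names; the real code is in the ipv4 snmp init path):

#include <linux/percpu.h>
#include <linux/smp.h>

struct snmp_stats {		/* made-up counter struct */
        unsigned long in_pkts;
};

static void snmp_init_sketch(void)
{
        /* returns NULL when the cpu offsets can't be honored in
         * the vmalloc area, but the result isn't checked */
        struct snmp_stats *stats = alloc_percpu(struct snmp_stats);

        /* per_cpu_ptr(NULL, cpu) is NULL plus that cpu's percpu
         * offset; on ia64 that lands just before cpu0's area */
        per_cpu_ptr(stats, smp_processor_id())->in_pkts++;
}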

>>> Also, if you apply the attached patch, does the added BUG_ON()
>>> trigger?
>>>   
>> I applied the three patches from Mel and one from Tejun.

Can you please apply only my patch?

> Thanks Sachin
> 
> Was there any useful result from Tejun's patch applied on its own?
> 
>> With these patches applied, the machine boots past
>> the originally reported SLQB problem, but then hangs
>> just after printing these messages.
>>
>> <6>ehea: eth0: Physical port up
>> <7>irq: irq 33539 on host null mapped to virtual irq 259
>> <6>ehea: External switch port is backup port
>> <7>irq: irq 33540 on host null mapped to virtual irq 260
>> <6>NET: Registered protocol family 10
>> ^^^^^^ Hangs at this point.
>>
>> Tejun, the above hang looks exactly the same as the one
>> I reported here:
>>
>> http://lists.ozlabs.org/pipermail/linuxppc-dev/2009-September/075791.html
>>
>> This particular hang was bisected to the following patch:
>>
>> powerpc64: convert to dynamic percpu allocator
>>
>> This hang can be recreated without SLQB, so I think this is a
>> different problem.
>>
> 
> Was that bug ever resolved?

Nope, not yet.  I'm thinking it could be something similar, though,
especially because it's failing while initializing NET too.  Can
someone please post a boot log from the machine?

Thanks.

-- 
tejun