Date:	Tue, 14 Oct 2014 17:32:46 -0400 (EDT)
From:	David Miller <>
Subject: Re: unaligned accesses in SLAB etc.

Date: Wed, 15 Oct 2014 00:19:36 +0300 (EEST)

>> > I'd like to know whether your other problem is related to commit
>> > bf0dea23a9c0 ("mm/slab: use percpu allocator for cpu cache").  So,
>> > if that commit is reverted, does your other problem also go away
>> > completely?
>> The other problem has been present forever.
> Umm? I am afraid I have been describing it badly. This random 
> SIGBUS+SIGSEGV problem is new - I have not seen it before.

Sorry, I thought it was the same bug that causes git corruptions
for you.  I misunderstood.

> I have been able to do kernel compiles for years on sparc64 (modulo
> specific bugs in specific configurations), and 3.17 + the start/end
> swap patch also seems stable on most machines. With yesterday's git +
> the align patch, it dies with SIGBUS multiple times during
> compilation, so it's a new regression for me.
> I will try reverting that commit tomorrow.

If that fails, please try to bisect; it will help us a lot.
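
For context on why these failures surface as SIGBUS: sparc64 is a
strict-alignment architecture, so a load or store through a pointer
that is not naturally aligned for its type traps, and the kernel
delivers SIGBUS, whereas x86 fixes such accesses up in hardware. A
minimal userspace sketch of that behavior (illustrative only; the
buffer, offset, and constant are made up, not taken from this thread):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	char buf[16] __attribute__((aligned(8)));

	/* buf + 1 points at valid memory but is misaligned for a
	 * 64-bit access; volatile forces the compiler to actually
	 * emit the access instead of optimizing it away. */
	volatile uint64_t *p = (volatile uint64_t *)(buf + 1);

	*p = 0x1122334455667788ULL;	/* traps with SIGBUS on sparc64 */
	printf("read back: %016llx\n", (unsigned long long)*p);
	return 0;
}

On x86 this runs (at worst slowly); on sparc64 the misaligned store
traps and the process dies with SIGBUS.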

> The only other sparc64 problems I am currently seeing: V210/V440 die
> during bootup if the kernel is compiled with gcc 4.9, and V480 dies
> with FATAL exceptions during bootup since the previous kernel release.
> Maybe also the exit_mmap warning - I do not know whether that has been
> fixed; I see it rarely.

The gcc-4.9 case is interesting: are you saying that a gcc-4.9-compiled
kernel works fine on other systems?
