Message-ID: <CA+55aFwyuVWHMq_oc_hfwWcu6RaPGSifXD9-adX2_TOa-L+PHA@mail.gmail.com>
Date:	Fri, 20 Mar 2015 17:47:51 -0700
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	David Ahern <david.ahern@...cle.com>
Cc:	linux-mm <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: 4.0.0-rc4: panic in free_block

On Fri, Mar 20, 2015 at 5:18 PM, David Ahern <david.ahern@...cle.com> wrote:
> On 3/20/15 4:49 PM, David Ahern wrote:
>>
>> I did ask around and apparently this bug is hit only with the new M7
>> processors. DaveM: that's why you are not hitting this.

Quite frankly, this smells even more like an architecture bug. It
could be anywhere: a CPU memory ordering issue, a compiler bug, a
missing barrier, or something else entirely.

How confident are you in the M7 memory ordering rules? It's a fairly
new core, no? With new speculative reads etc.? Maybe the Linux
spinlocks don't have the right serialization, and more aggressive
reordering in the new core exposes a bug?
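
To make the failure mode concrete, here's the kind of reordering I'm
worried about, written with C11 atomics rather than the real arch
code (illustration only, the names are made up):

	#include <stdatomic.h>

	static atomic_flag lock = ATOMIC_FLAG_INIT;
	static int shared_data;

	int read_under_lock(void)
	{
		/* Correct: acquire ordering pins the load below the lock */
		while (atomic_flag_test_and_set_explicit(&lock,
							 memory_order_acquire))
			;	/* spin */

		/*
		 * If the test-and-set above lacked acquire semantics, an
		 * aggressively speculating core could hoist this load above
		 * the lock acquisition and return stale data: exactly the
		 * kind of "works on old cores, breaks on new ones" bug.
		 */
		int val = shared_data;

		atomic_flag_clear_explicit(&lock, memory_order_release);
		return val;
	}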

Looking at this code, if this is a race, I see a few things that are
worth checking out:

 - it does a very much overlapping "memmove()". The
sparc/lib/memmove.S file looks suspiciously bad (is that a
byte-at-a-time loop? Is it even correctly checking overlap? See the
C sketch after this list for what it has to get right.)

 - it relies on both percpu data and a spinlock. I'm sure the sparc
spinlock code has been tested *extensively* with old cores, but maybe
some new speculative read ends up breaking it?
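
For reference, the overlap handling the asm has to get right is no
more than this (a plain C sketch, byte-at-a-time on purpose; not the
actual sparc code):

	#include <stddef.h>

	void *sketch_memmove(void *dst, const void *src, size_t n)
	{
		unsigned char *d = dst;
		const unsigned char *s = src;

		if (d < s) {
			/* dst below src: a forward copy never clobbers
			   unread source bytes */
			while (n--)
				*d++ = *s++;
		} else if (d > s) {
			/* dst above src and possibly overlapping:
			   copy backwards from the end */
			d += n;
			s += n;
			while (n--)
				*--d = *--s;
		}
		return dst;
	}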

I'm assuming M7 is still TSO and 'ldstub' has acquire semantics? Is
it configurable like some sparc versions? I'm wondering whether the
Solaris locks might have some extra memory barriers due to supporting
the other (weaker) sparc memory models, and maybe they hid some M7
"feature" by mistake...

*Some* of the sparc memcpy routines have odd membars in them. Why
would a TSO machine need a memory barrier inside a memcpy? That just
makes me go "Ehh?"

> Here's another data point: If I disable NUMA I don't see the problem.
> Performance drops, but no NULL pointer splats which would have been panics.

So the NUMA case triggers the per-node "n->shared" logic, which
*should* be protected by "n->list_lock". Maybe there is some bug there
- but since that code seems to do ok on x86-64 (and apparently older
sparc too), I really would look at arch-specific issues first.
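
Concretely, the invariant the slab code relies on is roughly this
(names loosely follow mm/slab.c, but this is a simplified
illustration, not the literal code):

	#include <linux/spinlock.h>

	struct array_cache {
		unsigned int avail;
		void *entry[64];	/* simplified: fixed size */
	};

	struct kmem_cache_node {
		spinlock_t list_lock;
		struct array_cache *shared;	/* per-node shared objects */
	};

	/*
	 * Every touch of n->shared must happen under n->list_lock.
	 * If the lock doesn't actually serialize on M7, two CPUs can
	 * race on ->avail and hand out the same object (or NULL),
	 * which is exactly the kind of splat being reported.
	 */
	static void *grab_shared_object(struct kmem_cache_node *n)
	{
		void *obj = NULL;

		spin_lock(&n->list_lock);
		if (n->shared && n->shared->avail)
			obj = n->shared->entry[--n->shared->avail];
		spin_unlock(&n->list_lock);

		return obj;
	}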

                        Linus
