Message-ID: <20140109213657.GD1141@redhat.com>
Date:	Thu, 9 Jan 2014 22:36:57 +0100
From:	Andrea Arcangeli <aarcange@...hat.com>
To:	Oleg Nesterov <oleg@...hat.com>
Cc:	Mel Gorman <mgorman@...e.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Dave Jones <davej@...hat.com>,
	Darren Hart <dvhart@...ux.intel.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	Heiko Carstens <heiko.carstens@...ibm.com>
Subject: Re: [PATCH v2 1/1] mm: fix the theoretical compound_lock() vs
 prep_new_page() race

On Thu, Jan 09, 2014 at 08:43:50PM +0100, Oleg Nesterov wrote:
> get_page_unless_zero(), and recently it was documented that the kernel can
> rely on the control dependency to serialize LOAD + STORE.

Ok, that's cheap then.

> 
> But we probably need barrier() in between; we can't use ACCESS_ONCE().

After get_page_unless_zero() I don't think there's any need for
barrier(). barrier() should be implicit in __atomic_add_unless();
in fact it should be a full smp_mb() equivalent too, since memory
is always clobbered there and the asm is volatile.
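
To spell out the chain I have in mind (a sketch from memory of
include/linux/mm.h and the x86 atomic code, not a verbatim quote):

	static inline int get_page_unless_zero(struct page *page)
	{
		/* atomic_inc_not_zero(v) is atomic_add_unless(v, 1, 0) */
		return atomic_inc_not_zero(&page->_count);
	}

	/*
	 * __atomic_add_unless() is a cmpxchg loop; on x86 SMP that is
	 * a locked op in volatile asm with a "memory" clobber, hence
	 * both a compiler barrier and a full smp_mb() equivalent.
	 */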

My wondering was only about the runtime (not compiler) barrier after
checking PageTail and before taking compound_lock: bit_spin_lock has
only acquire semantics, so in the absence of the branch that bails
out of the lock, the spinlock could run before the PageTail check.
If the branch is a good enough guarantee for all archs, it's a good
and cheap solution. Clearly this is not an x86 concern: x86 is always
fine without anything extra when surrounded by locked ops like here.
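
I.e. the pattern in question is shaped like this (a simplified
sketch of the tail-page path, not the verbatim source):

	if (likely(PageTail(page))) {	/* LOAD of page->flags */
		compound_lock(page_head); /* bit_spin_lock(): ACQUIRE only */
		/* STOREs to page_head in the critical section */
		compound_unlock(page_head);
	}

	/*
	 * Without the conditional branch, the ACQUIRE-only lock would
	 * let a weakly ordered arch issue the lock's store before the
	 * PageTail LOAD; the branch turns it into a LOAD -> STORE
	 * control dependency, which is why it's cheap if sufficient.
	 */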

> Yes. But in this case I really think we should clean up get/put first
> and add the helper, like the patch I mentioned does.

Ok, up to you; I'll check it.

> Oh, I don't think this highly theoretical fix should be backported
> but I agree, let's fix the bug first.

I think it should, but the risk-free version of it: either a few-liner
addition before compound_lock, or the previous version with smp_wmb()
inside the ifdef (considering it only matters on x86, where smp_wmb is
zero cost, and the only operational change there is actually the
reordering of set_page_refcounted, not the smp_wmb, which is
irrelevant for x86).
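
On the prep_new_page() side I mean something of this shape (a sketch,
not the actual patch):

	/* in prep_new_page(): */
	set_page_private(page, 0);
	/*
	 * Make the page initialization visible before the refcount
	 * becomes nonzero, so that a racing get_page_unless_zero() +
	 * compound_lock() cannot act on a half-initialized page.
	 * smp_wmb() is compiler-only on x86, so moving
	 * set_page_refcounted() below it is the only operational
	 * change there.
	 */
	smp_wmb();
	set_page_refcounted(page);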