Message-ID: <20140108131424.GE27046@suse.de>
Date:	Wed, 8 Jan 2014 13:14:24 +0000
From:	Mel Gorman <mgorman@...e.de>
To:	Oleg Nesterov <oleg@...hat.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Dave Jones <davej@...hat.com>,
	Darren Hart <dvhart@...ux.intel.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	Heiko Carstens <heiko.carstens@...ibm.com>
Subject: Re: [PATCH v2 1/1] mm: fix the theoretical compound_lock() vs
 prep_new_page() race

On Wed, Jan 08, 2014 at 11:54:00AM +0000, Mel Gorman wrote:
> On Sat, Jan 04, 2014 at 05:43:47PM +0100, Oleg Nesterov wrote:
> > On 01/03, Andrew Morton wrote:
> > >
> > > On Fri, 3 Jan 2014 20:55:47 +0100 Oleg Nesterov <oleg@...hat.com> wrote:
> > >
> > > > get/put_page(thp_tail) paths do get_page_unless_zero(page_head) +
> > > > compound_lock(). In theory this page_head can already have been freed
> > > > and reallocated as alloc_pages(__GFP_COMP, smaller_order). In this case
> > > > get_page_unless_zero() can succeed right after set_page_refcounted(),
> > > > and compound_lock() can race with the non-atomic __SetPageHead().
> > >
> > > Would be useful to mention that these things are happening inside
> > > prep_compound_page() (yes?).
> > 
> > Agreed. Added "in prep_compound_page()" into the changelog:
> > 
> > 	get/put_page(thp_tail) paths do get_page_unless_zero(page_head) +
> > 	compound_lock(). In theory this page_head can already have been freed
> > 	and reallocated as alloc_pages(__GFP_COMP, smaller_order). In this case
> > 	get_page_unless_zero() can succeed right after set_page_refcounted(),
> > 	and compound_lock() can race with the non-atomic __SetPageHead() in
> > 	prep_compound_page().
> > 
> > 	Perhaps we should rework the thp locking (under discussion), but
> > 	until then this patch moves set_page_refcounted() and adds wmb()
> > 	to ensure that page->_count != 0 comes as the last change.
> > 
> > 	I am not sure about other callers of set_page_refcounted(), but at
> > 	first glance they look fine to me.
> > 
> > or should I send v3?
> > 
> 
> This patch is putting a write barrier in the page allocator fast path and
> that is going to be a leading cause of Sad Face.
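
(For context, the path the changelog describes looks roughly like the
sketch below. This is a simplified illustration of the THP tail-page
get/put code in mm/swap.c, not the exact source, and the function name
here is made up.)

/*
 * Simplified sketch of put_page() on a THP tail page.  Only the
 * ordering matters: the refcount check on the head page happens
 * before taking the compound lock, which is a bit spinlock in
 * page->flags.
 */
static void put_thp_tail_sketch(struct page *page)
{
        struct page *page_head = compound_head(page);

        if (get_page_unless_zero(page_head)) {
                /*
                 * page_head may already have been freed and reallocated
                 * as a smaller __GFP_COMP page.  If its new owner is
                 * still inside prep_compound_page(), this bit spinlock
                 * on page->flags can race with the non-atomic
                 * __SetPageHead() there.
                 */
                compound_lock(page_head);
                /* ... recheck PageTail(page), fix up the counts ... */
                compound_unlock(page_head);
                put_page(page_head);
        }
}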

Peter Zijlstra correctly pointed out to me that on x86 we generally would
not care about or notice a write barrier, as it is almost always a no-op.
x86 (which is all I test any more) can execute an sfence for smp_wmb(),
but not in any configuration that matters. The previous barrier damage in
page_alloc.c was due to full barriers, but I generally assume barriers
have a cost in core code when I see them, regardless of the underlying
architecture details. So 99% of the time we will not care and I won't be
making a Sad Face, but eventually someone using an affected architecture
will whinge -- ppc64 probably, as write barriers on sparc are compiler
barriers.
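
For reference, the ordering the patch is aiming for is roughly the
following. This is a simplified sketch of what prep_new_page() ends up
doing after the change, not the actual diff, and the function name here
is illustrative:

/*
 * Sketch only: all page initialisation, including the non-atomic
 * __SetPageHead() done by prep_compound_page() for __GFP_COMP
 * allocations, must be visible before page->_count becomes non-zero.
 */
static void prep_new_page_sketch(struct page *page, int order, gfp_t gfp_flags)
{
        if (order && (gfp_flags & __GFP_COMP))
                prep_compound_page(page, order);

        /*
         * Make the compound state above visible before the refcount,
         * so a racing get_page_unless_zero() cannot succeed against a
         * half-initialised head page.
         */
        smp_wmb();
        set_page_refcounted(page);      /* _count 0 -> 1 comes last */
}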

-- 
Mel Gorman
SUSE Labs
