Message-ID: <8737qzxd4i.fsf@linux.vnet.ibm.com>
Date:	Wed, 06 Apr 2016 15:39:17 +0530
From:	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
To:	Michal Hocko <mhocko@...nel.org>,
	Sukadev Bhattiprolu <sukadev@...ux.vnet.ibm.com>
Cc:	Michael Ellerman <mpe@...erman.id.au>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	linuxppc-dev@...ts.ozlabs.org, James Dykman <jdykman@...ibm.com>
Subject: Re: [PATCH 1/1] powerpc/mm: Add memory barrier in __hugepte_alloc()

Michal Hocko <mhocko@...nel.org> writes:

> On Tue 05-04-16 12:05:47, Sukadev Bhattiprolu wrote:
> [...]
>> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
>> index d991b9e..081f679 100644
>> --- a/arch/powerpc/mm/hugetlbpage.c
>> +++ b/arch/powerpc/mm/hugetlbpage.c
>> @@ -81,6 +81,13 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
>>  	if (! new)
>>  		return -ENOMEM;
>>  
>> +	/*
>> +	 * Make sure other cpus find the hugepd set only after a
>> +	 * properly initialized page table is visible to them.
>> +	 * For more details look for comment in __pte_alloc().
>> +	 */
>> +	smp_wmb();
>> +
>
> what is the pairing memory barrier?
>
>>  	spin_lock(&mm->page_table_lock);
>>  #ifdef CONFIG_PPC_FSL_BOOK3E
>>  	/*

This is documented in __pte_alloc(); I didn't want to repeat the same
comment here. For reference, it reads:

	/*
	 * Ensure all pte setup (eg. pte page lock and page clearing) are
	 * visible before the pte is made visible to other CPUs by being
	 * put into page tables.
	 *
	 * The other side of the story is the pointer chasing in the page
	 * table walking code (when walking the page table without locking;
	 * ie. most of the time). Fortunately, these data accesses consist
	 * of a chain of data-dependent loads, meaning most CPUs (alpha
	 * being the notable exception) will already guarantee loads are
	 * seen in-order. See the alpha page table accessors for the
	 * smp_read_barrier_depends() barriers in page table walking code.
	 */
	smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
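
To make the pairing concrete, here is a rough sketch of the two sides
(illustrative only; the helper names below are made up and are not the
actual hugepd allocation or walker code):

	/* Writer side (what the patch does in __hugepte_alloc()): */
	new = alloc_and_zero_hugepte_page();	/* initialize the page table  */
	smp_wmb();				/* order init before publish  */
	*hpdp = make_hugepd(new);		/* make pointer visible	      */

	/*
	 * Reader side (lockless page table walk): the "pairing barrier" is
	 * the address dependency from the hugepd pointer load to the loads
	 * through it. Only alpha needs an explicit
	 * smp_read_barrier_depends(), which lives in its page table
	 * accessors.
	 */
	hpd = *hpdp;				/* load the published pointer */
	smp_read_barrier_depends();		/* no-op except on alpha      */
	ptep = hugepte_offset_in(hpd, addr);	/* data-dependent load	      */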


-aneesh
