Message-ID: <8f573b40-3a5a-ed36-dffb-4a54faf3c4e1@virtuozzo.com>
Date:   Wed, 16 Oct 2019 15:19:50 +0300
From:   Andrey Ryabinin <aryabinin@...tuozzo.com>
To:     Daniel Axtens <dja@...ens.net>, kasan-dev@...glegroups.com,
        linux-mm@...ck.org, x86@...nel.org, glider@...gle.com,
        luto@...nel.org, linux-kernel@...r.kernel.org,
        mark.rutland@....com, dvyukov@...gle.com, christophe.leroy@....fr
Cc:     linuxppc-dev@...ts.ozlabs.org, gor@...ux.ibm.com
Subject: Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real
 shadow memory


On 10/14/19 4:57 PM, Daniel Axtens wrote:
> Hi Andrey,
> 
> 
>>> +	/*
>>> +	 * Ensure poisoning is visible before the shadow is made visible
>>> +	 * to other CPUs.
>>> +	 */
>>> +	smp_wmb();
>>
>> I don't quite understand what this barrier does and why it's needed.
>> And if it's really needed, there should be a pairing barrier
>> on the other side, which I don't see.
> 
> Mark might be better able to answer this, but my understanding is that
> we want to make sure that we never have a situation where the writes are
> reordered so that the PTE is installed before all the poisoning is written
> out. I think it follows the logic in __pte_alloc() in mm/memory.c:
> 
> 	/*
> 	 * Ensure all pte setup (eg. pte page lock and page clearing) are
> 	 * visible before the pte is made visible to other CPUs by being
> 	 * put into page tables.
> 	 *
> 	 * The other side of the story is the pointer chasing in the page
> 	 * table walking code (when walking the page table without locking;
> 	 * ie. most of the time). Fortunately, these data accesses consist
> 	 * of a chain of data-dependent loads, meaning most CPUs (alpha
> 	 * being the notable exception) will already guarantee loads are
> 	 * seen in-order. See the alpha page table accessors for the
> 	 * smp_read_barrier_depends() barriers in page table walking code.
> 	 */
> 	smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
> 
> I can clarify the comment.
> 

I don't see how this is relevant here.

The barrier in __pte_alloc() is there for the following case:

CPU 0                                    CPU 1
__pte_alloc():                           pte_offset_kernel(pmd_t *dir, unsigned long address):
    pgtable_t new = pte_alloc_one(mm);       pte_t *new = (pte_t *)pmd_page_vaddr(*dir) +
                                                  ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1));
    smp_wmb();                               smp_read_barrier_depends();
    pmd_populate(mm, pmd, new);
                                             /* do something with the pte, e.g. check if (pte_none(*new)) */


It's needed to ensure that if CPU 1 sees the result of pmd_populate(), it also sees the initialized contents of 'new'.
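
In C11 terms this is just a release store paired with a dependency-ordered (consume) load. A minimal, self-contained userspace sketch of the same publish/consume pattern (illustrative only, not kernel code; the release store roughly plays the role of smp_wmb() plus the publishing store):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static _Atomic(int *) published = NULL;

static void *writer(void *arg)
{
	(void)arg;
	int *p = malloc(sizeof(*p));

	*p = 42;	/* initialize the payload first ("poison the shadow") */
	/* release store: payload writes cannot be reordered past the publish */
	atomic_store_explicit(&published, p, memory_order_release);
	return NULL;
}

static void *reader(void *arg)
{
	(void)arg;
	int *p;

	/* consume load: the dependent dereference below is ordered after it,
	 * like a pointer load followed by smp_read_barrier_depends() */
	while (!(p = atomic_load_explicit(&published, memory_order_consume)))
		;
	printf("%d\n", *p);	/* guaranteed to print 42 */
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}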

In our case, the barrier would have been needed if we had a corresponding reader side, like this:

if (!pte_none(*vmalloc_shadow_pte)) {
	shadow_addr = (unsigned long)__va(pte_pfn(*vmalloc_shadow_pte) << PAGE_SHIFT);
	smp_read_barrier_depends();
	READ_ONCE(*(u8 *)shadow_addr);	/* read the shadow; the barrier ensures that if we
					 * see the installed pte, we also see the initialized
					 * shadow memory. */
}


Without such a reader side, the barrier is pointless.
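
For reference, the writer side in question is the hunk quoted at the top of this mail; roughly (a sketch reconstructed around the quoted hunk, details approximated, not the verbatim patch):

	page = __get_free_page(GFP_KERNEL);
	memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);	/* poison the shadow */
	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

	/*
	 * Ensure poisoning is visible before the shadow is made visible
	 * to other CPUs.
	 */
	smp_wmb();

	spin_lock(&init_mm.page_table_lock);
	if (likely(pte_none(*ptep)))
		set_pte_at(&init_mm, addr, ptep, pte);	/* publish the shadow pte */
	spin_unlock(&init_mm.page_table_lock);

The smp_wmb() there has nothing to pair with unless some reader performs the dependent-load sequence shown above.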
