Date:   Sat, 25 Jan 2020 09:27:47 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Will Deacon <will@...nel.org>
Cc:     linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
        kernel-team@...roid.com, Michael Ellerman <mpe@...erman.id.au>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Segher Boessenkool <segher@...nel.crashing.org>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        Luc Van Oostenryck <luc.vanoostenryck@...il.com>,
        Arnd Bergmann <arnd@...db.de>,
        Peter Oberparleiter <oberpar@...ux.ibm.com>,
        Masahiro Yamada <masahiroy@...nel.org>,
        Nick Desaulniers <ndesaulniers@...gle.com>,
        ying.huang@...el.com, kirill.shutemov@...ux.intel.com
Subject: Re: [PATCH v2 05/10] READ_ONCE: Enforce atomicity for
 {READ,WRITE}_ONCE() memory accesses

On Thu, Jan 23, 2020 at 03:33:36PM +0000, Will Deacon wrote:
> {READ,WRITE}_ONCE() cannot guarantee atomicity for arbitrary data sizes.
> This can be surprising to callers that might incorrectly be expecting
> atomicity for accesses to aggregate structures, although there are other
> callers where tearing is actually permissible (e.g. if they are using
> something akin to sequence locking to protect the access).
> 
> Linus sayeth:
> 
>   | We could also look at being stricter for the normal READ/WRITE_ONCE(),
>   | and require that they are
>   |
>   | (a) regular integer types
>   |
>   | (b) fit in an atomic word
>   |
>   | We actually did (b) for a while, until we noticed that we do it on
>   | loff_t's etc and relaxed the rules. But maybe we could have a
>   | "non-atomic" version of READ/WRITE_ONCE() that is used for the
>   | questionable cases?
> 
> The slight snag is that we also have to support 64-bit accesses on 32-bit
> architectures, as these appear to be widespread and tend to work out ok
> if either the architecture supports atomic 64-bit accesses (x86, armv7)
> or if the variable being accessed represents a virtual address and
> therefore only requires 32-bit atomicity in practice.
> 
> Take a step in that direction by introducing a variant of
> 'compiletime_assert_atomic_type()' and use it to check the pointer
> argument to {READ,WRITE}_ONCE(). Expose __{READ,WRITE}_ONCE() variants
> which are allowed to tear and convert the two broken callers over to the
> new macros.
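
For context, the shape of the check being described is roughly the
following (only a sketch reusing compiletime_assert() and __native_word()
from compiler_types.h, not the exact hunk from the patch):

  /* Accept native word sizes, plus 64-bit on 32-bit architectures. */
  #define compiletime_assert_rwonce_type(t)				\
  	compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long), \
  		"Unsupported access size for {READ,WRITE}_ONCE().")

  /* Tearing-permitted variants: just a volatile access, no size check. */
  #define __READ_ONCE(x)		(*(const volatile typeof(x) *)&(x))
  #define __WRITE_ONCE(x, val)	do { *(volatile typeof(x) *)&(x) = (val); } while (0)

  #define READ_ONCE(x)							\
  ({									\
  	compiletime_assert_rwonce_type(x);				\
  	__READ_ONCE(x);							\
  })

  #define WRITE_ONCE(x, val)						\
  do {									\
  	compiletime_assert_rwonce_type(x);				\
  	__WRITE_ONCE(x, val);						\
  } while (0)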

The build robot is telling me we also need this for m68k; they have:

  arch/m68k/include/asm/page.h:typedef struct { unsigned long pmd[16]; } pmd_t;
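
so with the stricter check the existing lockless read in follow_pmd_mask()
(sketched here, pmd being a pmd_t *) no longer builds there:

  	pmd_t pmdval;

  	/* sizeof(pmd_t) == 16 * sizeof(unsigned long) on m68k, so this
  	 * now trips the size assertion at build time. */
  	pmdval = READ_ONCE(*pmd);

hence the __READ_ONCE() conversion in the diff below.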

Commit 688272809fcce seems to suggest the below is actually wrong tho.

---
diff --git a/mm/gup.c b/mm/gup.c
index 7646bf993b25..62885dad5444 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -320,7 +320,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	 * The READ_ONCE() will stabilize the pmdval in a register or
 	 * on the stack so that it will stop changing under the code.
 	 */
-	pmdval = READ_ONCE(*pmd);
+	pmdval = __READ_ONCE(*pmd);
 	if (pmd_none(pmdval))
 		return no_page_table(vma, flags);
 	if (pmd_huge(pmdval) && vma->vm_flags & VM_HUGETLB) {
@@ -345,7 +345,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 				  !is_pmd_migration_entry(pmdval));
 		if (is_pmd_migration_entry(pmdval))
 			pmd_migration_entry_wait(mm, pmd);
-		pmdval = READ_ONCE(*pmd);
+		pmdval = __READ_ONCE(*pmd);
 		/*
 		 * MADV_DONTNEED may convert the pmd to null because
 		 * mmap_sem is held in read mode
