Message-ID: <796cff9b-8eb8-8c53-9127-318d30618952@google.com>
Date: Sat, 22 Oct 2022 17:42:18 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
cc: Will Deacon <will@...nel.org>, x86@...nel.org, willy@...radead.org,
torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
aarcange@...hat.com, kirill.shutemov@...ux.intel.com,
jroedel@...e.de, ubizjak@...il.com
Subject: Re: [PATCH 07/13] mm/gup: Fix the lockless PMD access
On Sat, 22 Oct 2022, Peter Zijlstra wrote:
> On architectures where the PTE/PMD is larger than the native word size
> (i386-PAE for example), READ_ONCE() can do the wrong thing. Use
> pmdp_get_lockless() just like we use ptep_get_lockless().
I thought that was something Will Deacon put a lot of effort
into handling around 5.8 and 5.9: see "strong prevailing wind" in
include/asm-generic/rwonce.h, formerly in include/linux/compiler.h.
Was it too optimistic? Did the wind drop?
I'm interested in the answer, but I've certainly no objection
to making this all more obviously robust - thanks.
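[Editorial note: the retry scheme behind pmdp_get_lockless() can be sketched in user-space C. This is an illustration only, not kernel code; the struct, function names, and memory orderings here are hypothetical stand-ins for the kernel's CONFIG_GUP_GET_PXX_LOW_HIGH-style implementation, which reads a 64-bit entry as two 32-bit halves (as on i386-PAE, where a plain READ_ONCE() of the full entry could tear) and retries until both halves are seen to belong together.]

```c
/* Sketch (assumptions noted above): a 64-bit "PMD" stored as two
 * 32-bit halves on a 32-bit machine, with a lockless tear-free read. */
#include <stdint.h>
#include <stdatomic.h>

struct pmd64 {
    _Atomic uint32_t lo;
    _Atomic uint32_t hi;
};

/* Writer: clear the low half first so a reader can never pair a new
 * high half with an old low half; publish the low half last. */
static void pmd_set(struct pmd64 *p, uint64_t val)
{
    atomic_store_explicit(&p->lo, 0, memory_order_relaxed);
    atomic_store_explicit(&p->hi, (uint32_t)(val >> 32),
                          memory_order_release);
    atomic_store_explicit(&p->lo, (uint32_t)val,
                          memory_order_release);
}

/* Lockless reader: re-check the low half after reading the high half;
 * if it changed, a writer raced with us, so retry. */
static uint64_t pmd_get_lockless(struct pmd64 *p)
{
    uint32_t lo, hi;

    do {
        lo = atomic_load_explicit(&p->lo, memory_order_acquire);
        hi = atomic_load_explicit(&p->hi, memory_order_acquire);
    } while (atomic_load_explicit(&p->lo, memory_order_acquire) != lo);

    return ((uint64_t)hi << 32) | lo;
}
```

A single READ_ONCE() of the 64-bit value would compile to two separate 32-bit loads here, with no such consistency check between them.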
Hugh
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> kernel/events/core.c | 2 +-
> mm/gup.c | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -7186,7 +7186,7 @@ static u64 perf_get_pgtable_size(struct
> return pud_leaf_size(pud);
>
> pmdp = pmd_offset_lockless(pudp, pud, addr);
> - pmd = READ_ONCE(*pmdp);
> + pmd = pmdp_get_lockless(pmdp);
> if (!pmd_present(pmd))
> return 0;
>
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2507,7 +2507,7 @@ static int gup_pmd_range(pud_t *pudp, pu
>
> pmdp = pmd_offset_lockless(pudp, pud, addr);
> do {
> - pmd_t pmd = READ_ONCE(*pmdp);
> + pmd_t pmd = pmdp_get_lockless(pmdp);
>
> next = pmd_addr_end(addr, end);
> if (!pmd_present(pmd))