Message-ID: <2c800ed1-d17a-def4-39e1-09281ee78d05@nvidia.com>
Date:   Sun, 23 Oct 2022 22:42:49 -0700
From:   John Hubbard <jhubbard@...dia.com>
To:     Peter Zijlstra <peterz@...radead.org>, x86@...nel.org,
        willy@...radead.org, torvalds@...ux-foundation.org,
        akpm@...ux-foundation.org, Jann Horn <jannh@...gle.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        aarcange@...hat.com, kirill.shutemov@...ux.intel.com,
        jroedel@...e.de, ubizjak@...il.com
Subject: Re: [PATCH 01/13] mm: Update ptep_get_lockless()s comment

On 10/22/22 04:14, Peter Zijlstra wrote:
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -260,15 +260,12 @@ static inline pte_t ptep_get(pte_t *ptep
>  
>  #ifdef CONFIG_GUP_GET_PTE_LOW_HIGH
>  /*
> - * WARNING: only to be used in the get_user_pages_fast() implementation.
> - *
> - * With get_user_pages_fast(), we walk down the pagetables without taking any
> - * locks.  For this we would like to load the pointers atomically, but sometimes
> - * that is not possible (e.g. without expensive cmpxchg8b on x86_32 PAE).  What
> - * we do have is the guarantee that a PTE will only either go from not present
> - * to present, or present to not present or both -- it will not switch to a
> - * completely different present page without a TLB flush in between; something
> - * that we are blocking by holding interrupts off.
> + * For walking the pagetables without holding any locks.  Some architectures
> + * (e.g. x86-32 PAE) cannot load the entries atomically without using expensive
> + * instructions.  We are guaranteed that a PTE will only either go from not
> + * present to present, or present to not present -- it will not switch to a
> + * completely different present page without a TLB flush in between; which we
> + * are blocking by holding interrupts off.


This is getting interesting. My latest understanding of this story is
that both the "before" and "after" versions of that comment are
incorrect! As Jann Horn noticed recently [1], there might not be any
IPIs involved in a TLB flush at all if x86 is running under a
hypervisor (the flush can be done via a hypercall instead), so holding
interrupts off no longer blocks the remote flush, and that breaks the
chain of reasoning here.
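
To make the gap concrete: the lockless read this comment documents is
the low/high retry loop, roughly as it appears in
include/linux/pgtable.h under CONFIG_GUP_GET_PTE_LOW_HIGH (paraphrased
here, so it may differ in detail from the exact tree this patch is
against):

static inline pte_t ptep_get_lockless(pte_t *ptep)
{
	pte_t pte;

	do {
		/* On 32-bit PAE a pte_t is 64 bits, loaded as two halves. */
		pte.pte_low = ptep->pte_low;
		smp_rmb();
		pte.pte_high = ptep->pte_high;
		smp_rmb();
		/* Retry if the low half changed while we read the high half. */
	} while (unlikely(pte.pte_low != ptep->pte_low));

	return pte;
}

The retry on pte_low only rules out a torn read if the PTE cannot move
to a *different* present value between the two loads, which is exactly
what the flush-plus-interrupts-off argument was supposed to guarantee.
If a flush can complete without an IPI, the interrupts-off region no
longer excludes it, and the two halves are no longer known to form a
consistent snapshot.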


[1] https://lore.kernel.org/all/CAG48ez3h-mnp9ZFC10v+-BW_8NQvxbwBsMYJFP8JX31o0B17Pg@mail.gmail.com/


thanks,
-- 
John Hubbard
NVIDIA
