Message-ID: <3e50030d-2289-4470-a727-a293baa21618@redhat.com>
Date: Mon, 15 Apr 2024 12:57:58 +0200
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>, Mark Rutland <mark.rutland@....com>,
 Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
 Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
 Jiri Olsa <jolsa@...nel.org>, Ian Rogers <irogers@...gle.com>,
 Adrian Hunter <adrian.hunter@...el.com>,
 Andrew Morton <akpm@...ux-foundation.org>,
 Muchun Song <muchun.song@...ux.dev>
Cc: linux-arm-kernel@...ts.infradead.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v1 0/4] Reduce cost of ptep_get_lockless on arm64

On 15.04.24 11:28, Ryan Roberts wrote:
> On 12/04/2024 21:16, David Hildenbrand wrote:
>>>
>>> Yes agreed - 2 types; "lockless walkers that later recheck under PTL" and
>>> "lockless walkers that never take the PTL".
>>>
>>> Detail: the part about disabling interrupts and TLB flush syncing is
>>> arch-specific. That's not how arm64 does it (the hw broadcasts the TLBIs). But
>>> you make that clear further down.
>>
>> Yes, but disabling interrupts is also required for RCU-freeing of page tables
>> such that they can be walked safely. The TLB flush IPI is arch-specific and
>> indeed to sync against PTE invalidation (before generic GUP-fast).
>> [...]
>>
>>>>>
>>>>> Could it be this easy? My head is hurting...
>>>>
>>>> I think what has to happen is:
>>>>
>>>> (1) ptep_get_lockless() must return the same value as ptep_get() as long as there
>>>> are no races. No removal/addition of access/dirty bits etc.
>>>
>>> Today's arm64 ptep_get() guarantees this.
>>>
>>>>
>>>> (2) Lockless page table walkers that later verify under the PTL can handle
>>>> serious "garbage PTEs". This is our page fault handler.
>>>
>>> This isn't really a property of ptep_get_lockless(); it's a statement about a
>>> class of users. I agree with the statement.
>>
>> Yes. That's a requirement for the user of ptep_get_lockless(), such as page
>> fault handlers. Well, mostly "not GUP".
>>
>>>
>>>>
>>>> (3) Lockless page table walkers that cannot verify under PTL cannot handle
>>>> arbitrary garbage PTEs. This is GUP-fast. Two options:
>>>>
>>>> (3a) ptep_get_lockless() can atomically read the PTE: We re-check later if the
>>>> atomically-read PTE is still unchanged (without PTL). No IPI for TLB flushes
>>>> required. This is the common case. HW might concurrently set access/dirty bits,
>>>> so we can race with that. But we don't read garbage.
>>>
>>> Today's arm64 ptep_get() cannot guarantee that the access/dirty bits are
>>> consistent for contpte ptes. That's the bit that complicates the current
>>> ptep_get_lockless() implementation.
>>>
>>> But the point I was trying to make is that GUP-fast does not actually care about
>>> *all* the fields being consistent (e.g. access/dirty). So we could spec
>>> ptep_get_lockless() to say that "all fields in the returned pte are guaranteed
>>> to be self-consistent except for access and dirty information, which may be
>>> inconsistent if a racing modification occurred".
>>
>> We *might* have KVM in the future want to check that a PTE is dirty, such that
>> we can only allow dirty PTEs to be writable in a secondary MMU. That's not there
>> yet, but one thing I was discussing on the list recently. Buried in:
>>
>> https://lkml.kernel.org/r/20240320005024.3216282-1-seanjc@google.com
>>
>> We wouldn't care about racing modifications, as long as MMU notifiers will
>> properly notify us when the PTE would lose its dirty bits.
>>
>> But getting false-positive dirty bits would be problematic.
>>
>>>
>>> This could mean that the access/dirty state *does* change for a given page while
>>> GUP-fast is walking it, but GUP-fast *doesn't* detect that change. I *think*
>>> that failing to detect this is benign.
>>
>> I mean, HW could just set the dirty/access bit immediately after the check. So
>> if HW concurrently sets the bit and we don't observe that change when we
>> recheck, I think that would be perfectly fine.
> 
> Yes indeed; that's my point - GUP-fast doesn't care about access/dirty (or
> soft-dirty or uffd-wp).
> 
> But if you don't want to change the ptep_get_lockless() spec to explicitly allow
> this (because you have the KVM use case where false-positive dirty is
> problematic), then I think we are stuck with ptep_get_lockless() as implemented
> for arm64 today.

At least regarding the dirty bit, we'd have to guarantee that if
ptep_get_lockless() returns a false-positive dirty bit, the PTE recheck
would be able to catch that.

Would that be possible?

-- 
Cheers,

David / dhildenb

