Message-ID: <ca1f320b-dc06-48e0-b4f5-ce860a72f0e2@redhat.com>
Date: Wed, 3 Apr 2024 23:43:55 +0200
From: David Hildenbrand <david@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: David Matlack <dmatlack@...gle.com>, Paolo Bonzini <pbonzini@...hat.com>,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
David Stevens <stevensd@...omium.org>, Matthew Wilcox <willy@...radead.org>
Subject: Re: [RFC PATCH 0/4] KVM: x86/mmu: Rework marking folios
dirty/accessed
On 03.04.24 02:17, Sean Christopherson wrote:
> On Tue, Apr 02, 2024, David Hildenbrand wrote:
>> On 02.04.24 19:38, David Matlack wrote:
>>> On Wed, Mar 20, 2024 at 5:56 AM David Hildenbrand <david@...hat.com> wrote:
>>>> Secondary page tables are different from ordinary GUP, and KVM ends up
>>>> using GUP to some degree to simulate HW access; regarding NUMA-hinting,
>>>> KVM has already proven to be very different from all other GUP users. [1]
>>>>
>>>> And I recall that at some point I raised that we might want to have a
>>>> dedicated interface for this "mmu-notifier"-based page table
>>>> synchronization mechanism.
>>>>
>>>> But KVM ends up setting folio dirty/accessed flags itself, like other GUP
>>>> users. I do wonder if secondary page tables should be messing with folio
>>>> flags *at all*, and if there would be ways to do it differently using PTEs.
>>>>
>>>> We make sure to synchronize the secondary page tables to the process
>>>> page tables using MMU notifiers: when we write-protect/unmap a PTE, we
>>>> write-protect/unmap the SPTE. Yet, we handle accessed/dirty completely
>>>> differently.
>>>
>>> Accessed bits have the test/clear young MMU-notifiers. But I agree
>>> there aren't any notifiers for dirty tracking.
>>
>> Yes, and I am questioning if the "test" part should exist -- or if having a
>> SPTE in the secondary MMU should require the access bit to be set (derived
>> from the primary MMU). (again, my explanation about fake HW page table
>> walkers)
>>
>> There might be a good reason to do it like that nowadays, so I'm only
>> raising it as something I was wondering. Likely, frequent clearing of the
>> access bit would result in many PTEs in the secondary MMU getting
>> invalidated, requiring a new GUP-fast lookup where we would set the access
>> bit in the primary MMU PTE. But I'm not an expert on the implications with
>> MMU notifiers and access bit clearing.
>
> Ya. KVM already does something similar to this for dirty logging, where SPTEs are
> write-protected, i.e. generate faults to track dirty status. But KVM x86 has a
> lot of code to mitigate the resulting pain, e.g. a lockless fast-path to
> make the SPTE writable and propagate the dirty status to bitmaps, and userspace
> is heavily/carefully optimized to balance harvesting/clearing dirty status against
> guest activity.
>
> In general, assumptions that core MM makes about the cost of PTE modifications
> tend to fall apart for KVM. Much of that comes down to mmu_notifiers invalidations
> being an imperfect boundary between the primary MMU and secondary MMUs. E.g.
> any prot changes (NUMA balancing in particular) can have disastrous impact on KVM
> guests because those changes invalidate secondary MMUs at PMD granularity, and
> effectively force KVM to do a remote TLB flush for every PMD that is affected, whereas
> the primary MMU is able to batch TLB flushes until the entire VMA is processed.
>
> So yeah, forcing KVM to zap and re-fault SPTEs in order to do page aging would be
> comically slow for KVM guests without a crazy amount of optimization.
Right, so access bits are certainly special. Fortunately, they are not
as important to get right as dirty bits.
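(For context: the accessed-bit plumbing mentioned above is the set of
callbacks below in struct mmu_notifier_ops -- paraphrased from
include/linux/mmu_notifier.h from memory, so the exact signatures might be
slightly off. There is no dirty-bit counterpart today.

	int (*clear_flush_young)(struct mmu_notifier *subscription,
				 struct mm_struct *mm,
				 unsigned long start, unsigned long end);
	int (*clear_young)(struct mmu_notifier *subscription,
			   struct mm_struct *mm,
			   unsigned long start, unsigned long end);
	int (*test_young)(struct mmu_notifier *subscription,
			  struct mm_struct *mm,
			  unsigned long address);
)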
>
>>> Are there any cases where the primary MMU transfers the PTE dirty bit
>>> to the folio _other_ than zapping (which already has an MMU-notifier
>>> to KVM)? If not, then there might not be any reason to add a new
>>> notifier. Instead, the contract should just be that secondary MMUs must
>>> also transfer their dirty bits to folios in sync with (or before) the
>>> primary MMU zapping its PTE.
>>
>> Grepping for pte_mkclean(), there might be some cases. Many cases use MMU
>> notifiers, because they either clear the PTE or also remove write
>> permissions.
>>
>> But there are madvise_free_pte_range() and
>> clean_record_shared_mapping_range()...->clean_record_pte(), which might only
>> clear the dirty bit without clearing/changing permissions and consequently
>> not call MMU notifiers.
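(To spell out the madvise_free_pte_range() case: IIRC the relevant pattern
is roughly the following -- paraphrased from mm/madvise.c from memory, so
details may be off. The PTE stays present and writable; only the dirty and
accessed bits get dropped:

	ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
	ptent = pte_mkold(ptent);
	ptent = pte_mkclean(ptent);
	set_pte_at(mm, addr, pte, ptent);
	tlb_remove_tlb_entry(tlb, pte, addr);
)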
>
> Heh, I was staring at these earlier today. They are all wrapped with
> mmu_notifier_invalidate_range_{start,end}(), and KVM's hyper-aggressive response
> to mmu_notifier invalidations ensures all KVM SPTEs get dropped.
>
>> Getting a writable PTE without the dirty bit set should be possible.
>>
>> So I am questioning whether having a writable PTE in the secondary MMU with
>> a clean PTE in the primary MMU should be valid to exist. It can exist today,
>> and I am not sure if that's the right approach.
>
> I came to the same conclusion (that it can exist today). Without (sane) FOLL_TOUCH,
> KVM could retrieve the new PTE (W=1,D=0) after an mmu_notifier invalidation, and
> thus end up with a writable SPTE that isn't dirty in core MM.
>
> For MADV_FREE, I don't see how KVM's current behavior of marking the folio dirty
> at zap time changes anything. Ah, try_to_unmap_one() checks folio_test_dirty()
> *after* invoking mmu_notifier_invalidate_range_start(), which ensures that KVM's
> dirty status is propagated to the folio, and thus try_to_unmap_one() keeps the
> folio.
Right, we have to check if it has been re-dirtied. And any unexpected
reference (e.g., GUP) could dirty it.
>
> Aha! But try_to_unmap_one() also checks that refcount==mapcount+1, i.e. will
> also keep the folio if it has been GUP'd. And __remove_mapping() explicitly states
> that it needs to play nice with a GUP'd page being marked dirty before the
> reference is dropped.
>
> * Must be careful with the order of the tests. When someone has
> * a ref to the folio, it may be possible that they dirty it then
> * drop the reference. So if the dirty flag is tested before the
> * refcount here, then the following race may occur:
>
> So while it's totally possible for KVM to get a W=1,D=0 PTE, if I'm reading the
> code correctly it's safe/legal so long as KVM either (a) marks the folio dirty
> while holding a reference or (b) marks the folio dirty before returning from its
> mmu_notifier_invalidate_range_start() hook, *AND* obviously if KVM drops its
> mappings in response to mmu_notifier_invalidate_range_start().
>
Yes, I agree that it should work in the context of vmscan. But (b) is
certainly a bit harder to swallow than "ordinary" (a) :)
As raised, if having a writable SPTE would imply having a writable+dirty
PTE, then KVM MMU code wouldn't have to worry about syncing any dirty
bits ever back to core-MM, so patch #2 would not be required. ... well,
it would be replaced by an MMU notifier that notifies about clearing the
PTE dirty bit :)
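(Purely hypothetical, to illustrate the shape I have in mind -- mirroring
the existing clear_young()/test_young() callbacks; this is not existing
kernel code:

	/*
	 * Primary MMU cleared the dirty bit for [start, end) without
	 * changing permissions; secondary MMUs would transfer any dirty
	 * state they accumulated back before/while handling this.
	 */
	void (*clear_dirty)(struct mmu_notifier *subscription,
			    struct mm_struct *mm,
			    unsigned long start, unsigned long end);
)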
... also because there is a subtle difference between folio_set_dirty()
and folio_mark_dirty(), which I am still confused about and not competent
enough to explain ... KVM always does the former, while the zapping code
for pagecache folios does the latter ... hm
Related note: IIRC, we usually expect most anon folios to be dirty.
kvm_set_pfn_dirty()->kvm_set_page_dirty() does an unconditional
SetPageDirty()->folio_set_dirty(). Doing a test-before-set might
frequently avoid atomic ops.
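Something like this completely untested sketch is what I mean -- helper
name made up, and the real kvm_set_pfn_dirty() path has additional checks
(e.g., for reserved PFNs) that I'm ignoring here:

	static void kvm_folio_set_dirty(struct folio *folio)
	{
		/*
		 * Skip the atomic bit op if the folio is already dirty,
		 * which should be the common case for anon folios.
		 */
		if (!folio_test_dirty(folio))
			folio_set_dirty(folio);
	}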
>>>> I once had the following idea, but I am not sure about all implications,
>>>> just wanted to raise it because it matches the topic here:
>>>>
>>>> Secondary page tables kind-of behave like "HW" access. If there is a
>>>> write access, we would expect the original PTE to become dirty, not the
>>>> mapped folio.
>>>
>>> Propagating SPTE dirty bits to folios indirectly via the primary MMU
>>> PTEs won't work for guest_memfd where there is no primary MMU PTE. In
>>> order to avoid having two different ways to propagate SPTE dirty bits,
>>> KVM should probably be responsible for updating the folio directly.
>>>
>>
>> But who really cares about access/dirty bits for guest_memfd?
>>
>> guest_memfd already wants to disable/bypass all of core-MM, so different
>> rules are to be expected. This discussion is all about integration with
>> core-MM that relies on correct dirty bits, which does not really apply to
>> guest_memfd.
>
> +1, this is one of many reasons I don't want to support swap/reclaim for guest_memfd.
> The instant guest_memfd gets involved with anything LRU related, the interactions
> and complexity skyrockets.
Agreed ... but it's unfortunate :/
--
Cheers,
David / dhildenb