Message-ID: <CADrL8HUW2q79F0FsEjhGW0ujij6+FfCqas5UpQp27Epfjc94Nw@mail.gmail.com>
Date: Thu, 13 Jun 2024 17:45:40 -0700
From: James Houghton <jthoughton@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Yu Zhao <yuzhao@...gle.com>, Andrew Morton <akpm@...ux-foundation.org>, 
	Paolo Bonzini <pbonzini@...hat.com>, Ankit Agrawal <ankita@...dia.com>, 
	Axel Rasmussen <axelrasmussen@...gle.com>, Catalin Marinas <catalin.marinas@....com>, 
	David Matlack <dmatlack@...gle.com>, David Rientjes <rientjes@...gle.com>, 
	James Morse <james.morse@....com>, Jonathan Corbet <corbet@....net>, Marc Zyngier <maz@...nel.org>, 
	Oliver Upton <oliver.upton@...ux.dev>, Raghavendra Rao Ananta <rananta@...gle.com>, 
	Ryan Roberts <ryan.roberts@....com>, Shaoqin Huang <shahuang@...hat.com>, 
	Suzuki K Poulose <suzuki.poulose@....com>, Wei Xu <weixugc@...gle.com>, 
	Will Deacon <will@...nel.org>, Zenghui Yu <yuzenghui@...wei.com>, kvmarm@...ts.linux.dev, 
	kvm@...r.kernel.org, linux-arm-kernel@...ts.infradead.org, 
	linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v5 4/9] mm: Add test_clear_young_fast_only MMU notifier

On Tue, Jun 11, 2024 at 5:34 PM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Tue, Jun 11, 2024, James Houghton wrote:
> > On Tue, Jun 11, 2024 at 12:42 PM Sean Christopherson <seanjc@...gle.com> wrote:
> > > --
> > > diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> > > index 7b77ad6cf833..07872ae00fa6 100644
> > > --- a/mm/mmu_notifier.c
> > > +++ b/mm/mmu_notifier.c
> > > @@ -384,7 +384,8 @@ int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
> > >
> > >  int __mmu_notifier_clear_young(struct mm_struct *mm,
> > >                                unsigned long start,
> > > -                              unsigned long end)
> > > +                              unsigned long end,
> > > +                              bool fast_only)
> > >  {
> > >         struct mmu_notifier *subscription;
> > >         int young = 0, id;
> > > @@ -393,9 +394,12 @@ int __mmu_notifier_clear_young(struct mm_struct *mm,
> > >         hlist_for_each_entry_rcu(subscription,
> > >                                  &mm->notifier_subscriptions->list, hlist,
> > >                                  srcu_read_lock_held(&srcu)) {
> > > -               if (subscription->ops->clear_young)
> > > -                       young |= subscription->ops->clear_young(subscription,
> > > -                                                               mm, start, end);
> > > +               if (!subscription->ops->clear_young ||
> > > +                   (fast_only && !subscription->ops->has_fast_aging))
> > > +                       continue;
> > > +
> > > +               young |= subscription->ops->clear_young(subscription,
> > > +                                                       mm, start, end);
> >
> > KVM changing has_fast_aging dynamically would be slow, wouldn't it?
>
> No, it could/would be done quite quickly.  But, I'm not suggesting has_fast_aging
> be dynamic, i.e. it's not an "all aging is guaranteed to be fast", it's a "this
> MMU _can_ do fast aging".  It's a bit fuzzy/weird mostly because KVM can essentially
> have multiple secondary MMUs wired up to the same mmu_notifier.
>
> > I feel like it's simpler to just pass fast_only into `clear_young` itself
> > (and this is how I interpreted what you wrote above anyway).
>
> Eh, maybe?  A "has_fast_aging" flag is more robust in the sense that it requires
> secondary MMUs to opt-in, i.e. all secondary MMUs will be considered "slow" by
> default.
>
> It's somewhat of a moot point because KVM is the only secondary MMU that implements
> .clear_young() and .test_young() (which I keep forgetting), and that seems unlikely
> to change.
>
> A flag would also avoid an indirect call and thus a RETPOLINE when CONFIG_RETPOLINE=y,
> i.e. would be a minor optimization when KVM doesn't support fast aging.  But that's
> probably a pretty unlikely combination, so it's probably not a valid argument.
>
> So, I guess I don't have a strong opinion?

(Sorry for the somewhat delayed response... spent some time actually
writing what this would look like.)

I see what you mean, thanks! So has_fast_aging might be set by KVM when
the architecture selects a Kconfig saying that it understands the
concept of fast aging, basically what the presence of
test_clear_young_fast_only() indicates in this v5.
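
Something like this is what I'm picturing in virt/kvm/kvm_main.c (rough
sketch only; the Kconfig name below is made up, and the arch would be
the one to select it):

  static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {
	/* ...existing callbacks unchanged... */
	.clear_young		= kvm_mmu_notifier_clear_young,
	.test_young		= kvm_mmu_notifier_test_young,
	/* Opt in to fast aging only if the arch says it can do it. */
	.has_fast_aging		= IS_ENABLED(CONFIG_KVM_MMU_NOTIFIER_FAST_AGING),
  };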

>
> > > Double ugh.  Peeking ahead at the "failure" code, NAK to adding
> > > kvm_arch_young_notifier_likely_fast for all the same reasons I objected to
> > > kvm_arch_has_test_clear_young() in v1.  Please stop trying to do anything like
> > > that, I will NAK each and every attempt to have core mm/ code call directly into KVM.
> >
> > Sorry to make you repeat yourself; I'll leave it out of v6. I don't
> > like it either, but I wasn't sure how important it was to avoid
> > unnecessary notifier calls when the TDP MMU is completely disabled.
>
> If it's important, e.g. for performance, then the mmu_notifier should have a flag
> so that the behavior doesn't assume a KVM backend.   Hence my has_fast_aging
> suggestion.

Thanks! That makes sense.

> > > Anyways, back to this code, before we spin another version, we need to agree on
> > > exactly what behavior we want out of secondary MMUs.  Because to me, the behavior
> > > proposed in this version doesn't make any sense.
> > >
> > > Signalling failure because KVM _might_ have relevant aging information in SPTEs
> > > that require taking kvm->mmu_lock is a terrible tradeoff.  And for the test_young
> > > case, it's flat out wrong, e.g. if a page is marked Accessed in the TDP MMU, then
> > > KVM should return "young", not "failed".
> >
> > Sorry for this oversight. What about something like:
> >
> > 1. test (and maybe clear) A bits on TDP MMU
> > 2. If accessed && !should_clear: return (fast)
> > 3. if (fast_only): return (fast)
> > 4. If !(must check shadow MMU): return (fast)
> > 5. test (and maybe clear) A bits in shadow MMU
> > 6. return (slow)
>
> I don't understand where the "must check shadow MMU" in #4 comes from.  I also
> don't think it's necessary; see below.

I just meant `kvm_has_shadow_mmu_sptes()` or
`kvm_memslots_have_rmaps()`. I like the logic you suggest below. :)

> > Some of this reordering (and maybe a change from
> > kvm_shadow_root_allocated() to checking indirect_shadow_pages or
> > something else) can be done in its own patch.

So just to be clear, for test_young(), I intend to have a patch in v6
to elide the shadow MMU check if the TDP MMU indicates Accessed. Seems
like a pure win; no reason not to include it if we're making logic
changes here anyway.
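
For the record, the shape I have in mind for that is roughly the
following, reusing the helpers and the MMU_NOTIFY_WAS_FAST placeholder
from your sketch below (and kvm_test_age_rmap as the existing rmap
handler, unless I'm misremembering its name):

  static int __kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range,
				bool fast_only)
  {
	int young = 0;

	/* If the TDP MMU already says young, skip the shadow MMU entirely. */
	if (tdp_mmu_enabled && kvm_tdp_mmu_test_age_gfn(kvm, range))
		return 1 | MMU_NOTIFY_WAS_FAST;

	if (!fast_only && kvm_has_shadow_mmu_sptes(kvm)) {
		write_lock(&kvm->mmu_lock);
		young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap);
		write_unlock(&kvm->mmu_lock);
	}

	return young;
  }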

> >
> > > So rather than failing the fast aging, I think what we want is to know if an
> > > mmu_notifier found a young SPTE during a fast lookup.  E.g. something like this
> > > in KVM, where using kvm_has_shadow_mmu_sptes() instead of kvm_memslots_have_rmaps()
> > > is an optional optimization to avoid taking mmu_lock for write in paths where a
> > > (very rare) false negative is acceptable.
> > >
> > >   static bool kvm_has_shadow_mmu_sptes(struct kvm *kvm)
> > >   {
> > >         return !tdp_mmu_enabled || READ_ONCE(kvm->arch.indirect_shadow_pages);
> > >   }
> > >
> > >   static int __kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range,
> > >                          bool fast_only)
> > >   {
> > >         int young = 0;
> > >
> > >         if (!fast_only && kvm_has_shadow_mmu_sptes(kvm)) {
> > >                 write_lock(&kvm->mmu_lock);
> > >                 young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap);
> > >                 write_unlock(&kvm->mmu_lock);
> > >         }
> > >
> > >         if (tdp_mmu_enabled && kvm_tdp_mmu_age_gfn_range(kvm, range))
> > >                 young = 1 | MMU_NOTIFY_WAS_FAST;

The most straightforward way (IMHO) to return something like `1 |
MMU_NOTIFY_WAS_FAST` up to the MMU notifier itself is to make
gfn_handler_t return int instead of bool.

In this v5, I worked around this need by using `bool *failed` in patch
5[1]. I think the way this is going to look now in v6 would be cleaner
by actually changing gfn_handler_t to return int, and then we can
write something like what you wrote here. What do you think?

[1]: https://lore.kernel.org/linux-mm/20240611002145.2078921-6-jthoughton@google.com/
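
Roughly, the core of it would be (sketch only; nothing about the naming
is final):

  /*
   * virt/kvm/kvm_main.c: widen the handler return type from bool to int
   * so flag bits can ride along with "young".
   */
  typedef int (*gfn_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);

__kvm_handle_hva_range() already accumulates handler results with |=, so
once its return value is widened too, a handler returning something like
`1 | MMU_NOTIFY_WAS_FAST` propagates the flag up to the notifier without
any extra plumbing.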

> > I don't think this line is quite right. We might set
> > MMU_NOTIFY_WAS_FAST even when we took the mmu_lock. I understand what
> > you mean though, thanks.
>
> The name sucks, but I believe the logic is correct.  As posted here in v5, the
> MGLRU code wants to age both fast _and_ slow MMUs.  AIUI, the intent is to always
> get aging information, but only look around at other PTEs if it can be done fast.
>
>         if (should_walk_secondary_mmu()) {
>                 notifier_result =
>                         mmu_notifier_test_clear_young_fast_only(
>                                         vma->vm_mm, addr, addr + PAGE_SIZE,
>                                         /*clear=*/true);
>         }
>
>         if (notifier_result & MMU_NOTIFIER_FAST_FAILED)
>                 secondary_young = mmu_notifier_clear_young(vma->vm_mm, addr,
>                                                            addr + PAGE_SIZE);
>         else {
>                 secondary_young = notifier_result & MMU_NOTIFIER_FAST_YOUNG;
>                 notifier_was_fast = true;
>         }
>
> The change, relative to v5, that I am proposing is that MGLRU looks around if
> the page was young in _a_ "fast" secondary MMU, whereas v5 looks around if and
> only if _all_ secondary MMUs are fast.
>
> In other words, if a fast MMU had a young SPTE, look around _that_ MMU, via the
> fast_only flag.

Oh, yeah, that's a lot more intelligent than what I had. I think I
fully understand your suggestion; I guess we'll see in v6. :)
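
Concretely, I'm imagining the look-around loop ends up doing something
like this (sketch only, reusing the placeholder names from your snippet
above):

	/*
	 * During look-around, only poke secondary MMUs through the
	 * fast_only path, so a slow MMU can never turn look-around into
	 * a lock-taking walk.
	 */
	if (should_walk_secondary_mmu())
		young |= !!(mmu_notifier_test_clear_young_fast_only(
					vma->vm_mm, addr, addr + PAGE_SIZE,
					/*clear=*/true) &
			    MMU_NOTIFIER_FAST_YOUNG);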

I wonder if this still makes sense when whether an MMU is "fast" depends
on how contended some lock(s) are at the time. I think it does, but I
guess we can discuss it more if it turns out that having an architecture
participate like this is actually something we want to do (i.e., if the
performance results say it's a good idea).

Thanks Sean!
