Message-ID: <CALzav=d_UEEAvy6NcOLNWByuRzRqV=y2BWVXzyFMsS4zZwcZkA@mail.gmail.com>
Date:   Thu, 13 Oct 2022 15:56:31 -0700
From:   David Matlack <dmatlack@...gle.com>
To:     Sean Christopherson <seanjc@...gle.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Isaku Yamahata <isaku.yamahata@...el.com>
Subject: Re: [PATCH v4 00/11] KVM: x86/mmu: Make tdp_mmu a read-only parameter

On Thu, Oct 13, 2022 at 1:12 PM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Thu, Oct 13, 2022, David Matlack wrote:
> > On Wed, Oct 12, 2022 at 11:17 AM Sean Christopherson <seanjc@...gle.com> wrote:
> > > I'm not dead set against having a dedicated TDP MMU page fault handler, but
> > > IMO it's not really better once the TDP MMU vs. shadow MMU is reduced to a
> > > static branch, just different.  The read vs. write mmu_lock is the most
> > > visible ugliness, and that can be buried in helpers if we really want to
> > > make the page fault handler easier on the eyes, e.g.
>
> ...
>
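(Sean's example was snipped in the quote above. Purely as an illustrative
sketch, and not the snipped code, such a helper could look like the
following; the name reuses the direct_page_fault_mmu_lock() that comes up
later in this thread, and fault->is_tdp is a stand-in for whatever
predicate actually distinguishes the two MMUs:)

static void direct_page_fault_mmu_lock(struct kvm_vcpu *vcpu,
					struct kvm_page_fault *fault)
{
	/*
	 * The TDP MMU handles page faults with mmu_lock held for read,
	 * while the shadow MMU still requires the write lock.
	 */
	if (fault->is_tdp)
		read_lock(&vcpu->kvm->mmu_lock);
	else
		write_lock(&vcpu->kvm->mmu_lock);
}
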
> > My preference is still separate handlers. When I am reading this code,
> > I only care about one path (TDP MMU or Shadow MMU, usually the TDP MMU).
> > Having separate handlers makes the code easier to read, since I don't
> > have to care about the implementation details of the other MMU.
> >
> > And more importantly (but less certain), the TDP MMU fault handler is
> > going to diverge further from the Shadow MMU fault handler in the near
> > future, i.e. there will be more and more branches in a common fault
> > handler, and the value of having a common fault handler diminishes.
> > Specifically, to support moving the TDP MMU to common code, the TDP
> > MMU is no longer going to top up the same mem caches as the Shadow MMU
> > (the TDP MMU is not going to use struct kvm_mmu_page), and the TDP MMU
> > will probably have its own fast_page_fault() handler eventually.
>
> What if we hold off on the split for the moment, and then revisit the handler when
> a common MMU is closer to reality?  I agree that a separate handler makes sense
> once things start diverging, but until that happens, supporting two flows instead
> of one seems like it would add (minor) maintenance cost without much benefit.

Sure thing. I'll do the split as part of my series to move the TDP MMU
to common code, and we can revisit the discussion then.
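
For concreteness, a rough sketch of the shape such a split could take.
All names below are assumptions about the future series, except
direct_page_fault(), the common handler at the time of this thread:

/* Illustrative top-level dispatch once the two handlers diverge. */
static int kvm_mmu_dispatch_page_fault(struct kvm_vcpu *vcpu,
				       struct kvm_page_fault *fault)
{
	/*
	 * TDP MMU: read-side mmu_lock, its own mem caches, and
	 * eventually its own fast_page_fault() equivalent.
	 */
	if (is_tdp_mmu_enabled(vcpu->kvm))
		return kvm_tdp_mmu_page_fault(vcpu, fault);

	/* Shadow MMU: write-side mmu_lock, struct kvm_mmu_page based. */
	return direct_page_fault(vcpu, fault);
}

With a dispatch like this, each handler can evolve its caches, fast
path, and locking without adding branches to a shared flow.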

>
> > If we do go the common handler route, I don't like the
> > direct_page_fault_mmu_lock/unlock() wrappers, since they further
> > obscure the differences between the two MMUs.
>
> Yeah, I don't like the wrappers either.
