Message-ID: <20200822031944.GA4769@sjchrist-ice>
Date:   Fri, 21 Aug 2020 20:19:44 -0700
From:   Sean Christopherson <sean.j.christopherson@...el.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     Vitaly Kuznetsov <vkuznets@...hat.com>, kvm@...r.kernel.org,
        Paolo Bonzini <pbonzini@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Peter Xu <peterx@...hat.com>,
        Julia Suvorova <jsuvorov@...hat.com>,
        Andy Lutomirski <luto@...nel.org>,
        Andrew Jones <drjones@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/3] KVM: x86: introduce KVM_MEM_PCI_HOLE memory

On Thu, Aug 20, 2020 at 09:46:25PM -0400, Michael S. Tsirkin wrote:
> On Mon, Aug 17, 2020 at 09:32:07AM -0700, Sean Christopherson wrote:
> > On Fri, Aug 14, 2020 at 10:30:14AM -0400, Michael S. Tsirkin wrote:
> > > On Thu, Aug 13, 2020 at 07:31:39PM -0700, Sean Christopherson wrote:
> > > > > @@ -2318,6 +2338,11 @@ static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
> > > > >  	int r;
> > > > >  	unsigned long addr;
> > > > >  
> > > > > +	if (unlikely(slot && (slot->flags & KVM_MEM_PCI_HOLE))) {
> > > > > +		memset(data, 0xff, len);
> > > > > +		return 0;
> > > > > +	}
> > > > 
> > > > This feels wrong; shouldn't we be treating PCI_HOLE as MMIO?  Given that
> > > > this is performance-oriented, I would think we'd want to leverage the
> > > > GPA from the VMCS instead of doing a full translation.
> > > > 
> > > > That brings up a potential alternative to adding a memslot flag.  What if
> > > > we instead add a KVM_MMIO_BUS device similar to coalesced MMIO?  I think
> > > > it'd be about the same amount of KVM code, and it would provide userspace
> > > > with more flexibility, e.g. I assume it would allow handling even writes
> > > > wholly within the kernel for certain ranges and/or use cases, and it'd
> > > > allow stuffing a value other than 0xff (though I have no idea if there is
> > > > a use case for this).
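
For the sake of discussion, a rough sketch of such a device, built on the
same iodev plumbing that coalesced MMIO uses.  The pci_hole_* names and the
registration hook are made up; kvm_iodevice_init() and
kvm_io_bus_register_dev() are the existing APIs.

#include <linux/kvm_host.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <kvm/iodev.h>

struct pci_hole_dev {
	struct kvm_io_device dev;
};

/* Reads of unbacked PCI space return all 1s, a la master abort. */
static int pci_hole_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
			 gpa_t addr, int len, void *val)
{
	memset(val, 0xff, len);
	return 0;
}

static int pci_hole_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
			  gpa_t addr, int len, const void *val)
{
	return 0;	/* silently drop writes */
}

static const struct kvm_io_device_ops pci_hole_ops = {
	.read	= pci_hole_read,
	.write	= pci_hole_write,
};

/* Would be called from a (hypothetical) ioctl that passes in the range. */
static int kvm_register_pci_hole(struct kvm *kvm, gpa_t addr, int len)
{
	struct pci_hole_dev *p = kzalloc(sizeof(*p), GFP_KERNEL);
	int ret;

	if (!p)
		return -ENOMEM;

	kvm_iodevice_init(&p->dev, &pci_hole_ops);

	mutex_lock(&kvm->slots_lock);
	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, addr, len, &p->dev);
	mutex_unlock(&kvm->slots_lock);
	if (ret < 0)
		kfree(p);
	return ret;
}

Since the bus lookup is keyed on the GPA taken straight from the exit,
this path also skips the full memslot translation mentioned above.
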
> > > 
> > > I still think that down the road the way to go is to map a
> > > valid RO page full of 0xff to avoid exits on reads.
> > > I don't think a KVM_MMIO_BUS device will allow this, will it?
> > 
> > No, it would not, but adding KVM_MEM_PCI_HOLE doesn't get us any closer to
> > solving that problem either.
> 
> I'm not sure why. Care to elaborate?

The bulk of the code in this series would get thrown away if KVM_MEM_PCI_HOLE
were reworked to be backed by a physical page.  If we really want a physical
page, then let's use a physical page from the get-go.
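
FWIW, doing it with a physical page from the get-go is just plain userspace
code against the existing memslot API.  Untested sketch; vm_fd, the slot
number, and the hole's GPA/size are whatever the VMM already tracks:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/* Back the PCI hole with real, read-only, 0xff-filled memory so that
 * guest reads are satisfied without a VM-Exit.  Guest writes to a
 * KVM_MEM_READONLY slot still exit to userspace as MMIO.
 */
static int back_pci_hole(int vm_fd, uint32_t slot, uint64_t hole_gpa,
			 uint64_t hole_size)
{
	void *buf = mmap(NULL, hole_size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return -1;
	memset(buf, 0xff, hole_size);

	struct kvm_userspace_memory_region region = {
		.slot = slot,
		.flags = KVM_MEM_READONLY,
		.guest_phys_addr = hole_gpa,
		.memory_size = hole_size,
		.userspace_addr = (uintptr_t)buf,
	};
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

The obvious cost is hole_size worth of pages per VM, which is presumably
what routing all GFNs to a single HVA (below) would be trying to avoid.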

I realize I suggested the specialized MMIO idea, but that's when I thought the
primary motivation was memory, not performance.

> > What if we add a flag to allow routing all GFNs in a memslot to a single
> > HVA?
> 
> An issue here would be that it breaks attempts to use a hugepage for this.

What are the performance numbers of a hugepage vs. aggressively prefetching
SPTEs?  Note, the unbounded prefetching from the original RFC won't fly,
but prefetching 2MB ranges might be reasonable.

Re-raising an earlier unanswered question: is enlightening the guest an
option for this use case?
