Message-ID: <20200514233208.GI15847@linux.intel.com>
Date: Thu, 14 May 2020 16:32:08 -0700
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Peter Xu <peterx@...hat.com>
Cc: Vitaly Kuznetsov <vkuznets@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Michael Tsirkin <mst@...hat.com>,
Julia Suvorova <jsuvorov@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>, x86@...nel.org
Subject: Re: [PATCH RFC 0/5] KVM: x86: KVM_MEM_ALLONES memory
On Thu, May 14, 2020 at 07:22:50PM -0400, Peter Xu wrote:
> On Thu, May 14, 2020 at 03:56:24PM -0700, Sean Christopherson wrote:
> > On Thu, May 14, 2020 at 06:05:16PM -0400, Peter Xu wrote:
> > > E.g., shm_open() with a handle and fill one 0xff page, then remap it to
> > > anywhere needed in QEMU?
> >
> > Mapping that 4k page over and over is going to get expensive, e.g. each
> > duplicate will need a VMA and a memslot, plus any PTE overhead. If the
> > total sum of the holes is >2MB it'll even overflow the number of allowed
> > memslots.
>
> What's the PTE overhead you mentioned? We need to fill PTEs one by one on
> fault even if the page is allocated in the kernel, am I right?
It won't require host PTEs for every page if it's a kernel page. I doubt
PTEs are a significant overhead, especially compared to memslots, but it's
still worth considering.
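
(Purely for illustration, a rough userspace sketch of the remapping approach
being discussed; the shm name and slot numbering are made up and error
handling is omitted, so treat it as a back-of-the-envelope, not a proposal:)

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/* Create the single 0xff "template" page in a shared memory object. */
static int create_allones_fd(void)
{
	int fd = shm_open("/qemu-allones", O_CREAT | O_RDWR, 0600);
	void *p;

	ftruncate(fd, 4096);
	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	memset(p, 0xff, 4096);
	munmap(p, 4096);
	return fd;
}

/*
 * Back one 4k hole page with the template.  Each call burns a VMA (the new
 * mmap()) and a KVM memslot, which is the overhead called out above.
 */
static void map_hole_page(int vm_fd, int allones_fd, unsigned int slot, __u64 gpa)
{
	void *hva = mmap(NULL, 4096, PROT_READ, MAP_SHARED, allones_fd, 0);
	struct kvm_userspace_memory_region region = {
		.slot = slot,
		.flags = KVM_MEM_READONLY,
		.guest_phys_addr = gpa,
		.memory_size = 4096,
		.userspace_addr = (__u64)(unsigned long)hva,
	};

	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}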
My thought was to skimp on both host PTEs _and_ KVM SPTEs by always sending
the PCI hole accesses down the slow MMIO path[*].
[*] https://lkml.kernel.org/r/20200514194624.GB15847@linux.intel.com
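
(For comparison, a sketch of what the userspace end of the MMIO-exit path
could look like if the VMM simply completes reads to the hole with all-ones;
gpa_is_pci_hole() is a made-up stand-in for whatever range check the VMM
already has, and the linked proposal may handle this differently in KVM:)

#include <string.h>
#include <stdbool.h>
#include <linux/kvm.h>

/* Stand-in for the VMM's own "is this GPA in a PCI hole?" check. */
static bool gpa_is_pci_hole(__u64 gpa);

/*
 * With no memslot covering the hole, the access exits to userspace as
 * KVM_EXIT_MMIO; reads can be completed with all-ones without KVM ever
 * installing SPTEs (or the host installing PTEs) for the range.
 */
static void handle_pci_hole_exit(struct kvm_run *run)
{
	if (run->exit_reason == KVM_EXIT_MMIO && !run->mmio.is_write &&
	    gpa_is_pci_hole(run->mmio.phys_addr))
		memset(run->mmio.data, 0xff, run->mmio.len);
}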
> 4K is only an example - we can also use more pages as the template. However I
> guess the KVM memslot count could be a limit. Could I ask what the normal
> size of this 0xff region is, and its distribution?
>
> Thanks,
>
> --
> Peter Xu
>