Message-ID: <CANgfPd-zgP8GD+yOwZ1V-S=r9W92hFSEBpmjLzaqZDg=O20Hnw@mail.gmail.com>
Date: Tue, 29 Sep 2020 10:40:34 -0700
From: Ben Gardon <bgardon@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
Cannon Matthews <cannonmatthews@...gle.com>,
Peter Xu <peterx@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Peter Shier <pshier@...gle.com>,
Peter Feiner <pfeiner@...gle.com>,
Junaid Shahid <junaids@...gle.com>,
Jim Mattson <jmattson@...gle.com>,
Yulei Zhang <yulei.kernel@...il.com>,
Wanpeng Li <kernellwp@...il.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Xiao Guangrong <xiaoguangrong.eric@...il.com>
Subject: Re: [PATCH 00/22] Introduce the TDP MMU
On Mon, Sep 28, 2020 at 10:31 AM Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> On 25/09/20 23:22, Ben Gardon wrote:
> > This series is the first of two. In this series we add a basic
> > implementation of the TDP MMU. In the next series we will improve the
> > performance of the TDP MMU and allow it to execute MMU operations
> > in parallel.
>
> I have finished rebasing and adding a few cleanups on top, but I don't
> have time to test it today. I think the changes shouldn't get too much
> in the way of the second series, but I've also pushed your v1 unmodified
> to kvm/tdp-mmu for future convenience. I'll await your feedback in
> the meantime!
Awesome, thank you for the reviews and positive response! I'll get to
work responding to your comments and preparing a v2.
>
> One feature that I noticed is missing is the shrinker. What are your
> plans (or opinions) around it?
I assume by the shrinker you mean the page table quota that controls
how many pages the MMU can use at a time to back guest memory?
I think the shrinker is less important for the TDP MMU because there
is an implicit limit on how much page table memory it will use: since
the TDP MMU maps guest physical addresses directly, the number of page
table pages it needs is bounded by the size of the guest's memory.
You could set the quota smaller than the number of pages required to
fully map the guest's memory, but I'm not really sure why you would
want to in a practical scenario. I understand the quota's importance
for x86 shadow paging and nested TDP scenarios, where the guest can
cause KVM to allocate an unbounded amount of memory for page tables,
but the guest does not have that power in the non-nested TDP scenario.
Really, I didn't include it in this series because we haven't needed
it at Google and so I never implemented the quota enforcement. It
shouldn't be difficult to implement if you think it's worth having,
and it'll be needed to support nested on the TDP MMU (without using
the shadow MMU) anyway. If you're okay with leaving it out of the
initial patch set though, I'm inclined to do that.
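For concreteness, here's a rough, untested sketch of what quota
enforcement could look like at the point where the TDP MMU allocates a
page table page. It reuses the shadow MMU's existing n_used_mmu_pages /
n_max_mmu_pages accounting; tdp_mmu_alloc_sp() and
kvm_tdp_mmu_zap_some_pages() are hypothetical names, not functions from
this series:

/*
 * Hypothetical sketch only: enforce the existing MMU page quota when
 * the TDP MMU allocates a page table page. n_used_mmu_pages and
 * n_max_mmu_pages are the shadow MMU's accounting fields; the zap
 * helper below does not exist in this series.
 */
static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
{
	struct kvm *kvm = vcpu->kvm;
	struct kvm_mmu_page *sp;

	/* Reclaim before allocating if we're already at the quota. */
	if (kvm->arch.n_used_mmu_pages >= kvm->arch.n_max_mmu_pages)
		kvm_tdp_mmu_zap_some_pages(kvm);	/* hypothetical */

	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
	++kvm->arch.n_used_mmu_pages;

	return sp;
}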
>
> Also, the code generally assumes a 64-bit CPU (i.e. that writes to 64-bit
> PTEs are atomic). That is not a big issue; it just needs a small change
> on top to make the TDP MMU conditional on CONFIG_X86_64.
Ah, that didn't occur to me. Thank you for pointing that out. I'll fix
that oversight in v2.
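Something like the below is what I have in mind for v2 (a rough
sketch; the exact variable and helper names are illustrative, not
final):

/*
 * Rough sketch: only enable the TDP MMU on 64-bit hosts, where a PTE
 * update via WRITE_ONCE() is a single, naturally atomic 64-bit store.
 * On a 32-bit host the store would be split in two, and concurrent
 * page table walkers could observe a half-written PTE.
 */
#ifdef CONFIG_X86_64
static bool __read_mostly tdp_mmu_enabled = true;
module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644);
#else
#define tdp_mmu_enabled false
#endif

static inline bool is_tdp_mmu_enabled(struct kvm *kvm)
{
	/* The TDP MMU also requires TDP (EPT/NPT) itself. */
	return tdp_enabled && tdp_mmu_enabled;
}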
>
> Thanks,
>
> Paolo
>