Message-ID: <87bmano0qp.fsf@vitty.brq.redhat.com>
Date: Tue, 31 Jul 2018 17:58:54 +0200
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH rebase/RFC 0/4] x86/kvm/nVMX: optimize MMU switch between L1 and L2
Paolo Bonzini <pbonzini@...hat.com> writes:
> This is a rebased version of Vitaly's RFC series. This isn't
> quite ready for application as I haven't yet thought through
> the interactions between the root_mmu/guest_mmu split and
> the multi-root caching.
>
> Speaking of the multi-root caching, it is a bit of duplicate work
> with Vitaly's last three patches, which avoided reinitialization when
> the parameters and CR3 matched, so the series got smaller too.
>
Thank you for the rebase!

It seems that with multi-root caching this series should simply ignore
CR3 changes for both root_mmu and guest_mmu: we now have two separate
'prev_roots' caches, and they work well. However, we can still optimize
away MMU re-initialization on L1->L2 and L2->L1 switches using e.g. my
'scache' idea (which can stay orthogonal to the page_role check on CR3).
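
For illustration, a rough sketch of what I have in mind (hypothetical
names throughout, not the actual patch): remember the role/CR3 pair the
current MMU context was built for and skip the expensive
re-initialization on an L1<->L2 switch when nothing changed, analogous
to how 'prev_roots' caches previously used roots:

/*
 * Hypothetical sketch: struct kvm_mmu_scache and the helpers below do
 * not exist in KVM today; kvm_mmu_reset_context() stands in for the
 * full MMU re-initialization path.
 */
struct kvm_mmu_scache {
	bool valid;
	gpa_t cr3;                      /* CR3 the context was built for */
	union kvm_mmu_page_role role;   /* role the context was built for */
};

static bool mmu_scache_hit(struct kvm_mmu_scache *sc,
			   union kvm_mmu_page_role new_role, gpa_t new_cr3)
{
	return sc->valid && sc->cr3 == new_cr3 &&
	       sc->role.word == new_role.word;
}

static void mmu_maybe_reinit(struct kvm_vcpu *vcpu,
			     struct kvm_mmu_scache *sc,
			     union kvm_mmu_page_role new_role, gpa_t new_cr3)
{
	if (mmu_scache_hit(sc, new_role, new_cr3))
		return;                 /* fast path: cached context is valid */

	kvm_mmu_reset_context(vcpu);    /* slow path: full re-initialization */
	sc->cr3 = new_cr3;
	sc->role = new_role;
	sc->valid = true;
}

On an L1->L2 switch we would call mmu_maybe_reinit() with guest_mmu's
scache, and on L2->L1 with root_mmu's, so that each direction hits the
fast path as long as the corresponding context is unchanged.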
In my Hyper-V-on-KVM environment I'm seeing an additional win of 1000
CPU cycles per nested vmexit.
I'll pull things together and re-send the whole series.
--
Vitaly