Message-ID: <3d72790d-64be-7409-1d92-db7ec92b932b@redhat.com>
Date: Mon, 25 Oct 2021 18:05:05 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Maxim Levitsky <mlevitsk@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 0/4] KVM: x86: APICv cleanups
On 25/10/21 17:59, Sean Christopherson wrote:
>> No, checking for the update is worse and with this example, I can now point
>> my finger on why I preferred the VM check even before: because even though
>> the page fault path runs in vCPU context and uses a vCPU-specific role,
>> overall the page tables are still per-VM.
> Arguably the lack of incorporation into the page role is the underlying bug, and
> all the shenanigans with synchronizing updates are just workarounds for that bug.
> I.e. page tables are never strictly per-VM, they're per-role, but we fudge it in
> this case because we don't want to take on the overhead of maintaining two sets
> of page tables to handle APICv.
Yes, that makes sense as well:
- you can have simpler code by using the vCPU state, but then
correctness requires that the APICv state be part of the vCPU-specific
MMU state, i.e. part of the role.
- if you don't want to do that, because you want to maintain only one
set of page tables, the price to pay is the synchronization shenanigans,
both those involving the apicv_update mutex^Wrwsem (which ensure no one
uses the old state) and those involving kvm_faultin_pfn/kvm_zap_gfn_range
(which ensure the one state used by the MMU is the correct one).
So it's a pick-your-poison situation.
Paolo