Message-ID: <5b4a0c30-118c-da1f-281c-130438a1c833@redhat.com>
Date: Wed, 28 Apr 2021 23:41:47 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Ben Gardon <bgardon@...gle.com>
Cc: LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
Peter Xu <peterx@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Peter Shier <pshier@...gle.com>,
Junaid Shahid <junaids@...gle.com>,
Jim Mattson <jmattson@...gle.com>,
Yulei Zhang <yulei.kernel@...il.com>,
Wanpeng Li <kernellwp@...il.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Xiao Guangrong <xiaoguangrong.eric@...il.com>
Subject: Re: [PATCH 5/6] KVM: x86/mmu: Protect kvm->memslots with a mutex
On 28/04/21 22:40, Ben Gardon wrote:
> ... However with the locking you propose below, we might still run
> into issues on a move or delete, which would mean we'd still need the
> separate memory allocation for the rmaps array. Or we do some
> shenanigans where we try to copy the rmap pointers from the other set
> of memslots.
If that's (almost) as easy as passing old to
kvm_arch_prepare_memory_region, that would be totally okay.
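For illustration, a minimal sketch of what that copy could look like if the
prepare hook (or a helper called from it) were given both the old and the new
slot. The helper name is made up and this assumes the x86 arch.rmap layout; it
is not code from the series:

/*
 * Sketch only: on a move or flags-only update the slot keeps the same
 * number of pages, so the old slot's per-level rmap arrays are already
 * the right size and their pointers can simply be reused instead of
 * reallocated. copy_rmaps_from_old_slot() is a made-up name.
 */
static void copy_rmaps_from_old_slot(struct kvm_memory_slot *new,
                                     const struct kvm_memory_slot *old)
{
        int i;

        for (i = 0; i < KVM_NR_PAGE_SIZES; i++)
                new->arch.rmap[i] = old->arch.rmap[i];
}

A newly created slot would of course still need a fresh allocation, which is
the case Ben mentions above.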
> My only worry is the latency this could add to a nested VM launch, but
> it seems pretty unlikely that that would be frequently coinciding with
> a memslot change in practice.
Right, memslot changes in practice occur only at boot and on hotplug.
If that were a problem we could always make the allocation state
off/in-progress/on, so that the allocation state can be checked outside
the lock. This would only potentially slow down the first nested VM launch.
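A minimal sketch of that scheme, with placeholder names for the state field,
the mutex and the allocation helper (none of them taken from the patches):

/*
 * Sketch only; rmap_state, slots_arch_mutex and alloc_all_memslot_rmaps()
 * are placeholders.
 */
enum { RMAPS_OFF, RMAPS_IN_PROGRESS, RMAPS_ON };

static int ensure_rmaps(struct kvm *kvm)
{
        int r = 0;

        /* Lockless fast path once the rmaps have been published. */
        if (smp_load_acquire(&kvm->arch.rmap_state) == RMAPS_ON)
                return 0;

        mutex_lock(&kvm->arch.slots_arch_mutex);
        if (kvm->arch.rmap_state != RMAPS_ON) {
                kvm->arch.rmap_state = RMAPS_IN_PROGRESS;
                r = alloc_all_memslot_rmaps(kvm);
                if (!r)
                        /* Make the rmaps visible before the state flips on. */
                        smp_store_release(&kvm->arch.rmap_state, RMAPS_ON);
                else
                        kvm->arch.rmap_state = RMAPS_OFF;
        }
        mutex_unlock(&kvm->arch.slots_arch_mutex);
        return r;
}

The acquire/release pairing is what makes the out-of-lock check safe: a vCPU
that observes RMAPS_ON is guaranteed to also observe the rmap pointers written
by the allocation, and anything else just means "take the mutex".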
Paolo