Message-ID: <CANgfPd_S=LjEs+s2UzcHZKfUHf+n498eSbfidpXNFXjJT8kxzw@mail.gmail.com>
Date: Wed, 28 Apr 2021 14:46:35 -0700
From: Ben Gardon <bgardon@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
Peter Xu <peterx@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Peter Shier <pshier@...gle.com>,
Junaid Shahid <junaids@...gle.com>,
Jim Mattson <jmattson@...gle.com>,
Yulei Zhang <yulei.kernel@...il.com>,
Wanpeng Li <kernellwp@...il.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Xiao Guangrong <xiaoguangrong.eric@...il.com>
Subject: Re: [PATCH 5/6] KVM: x86/mmu: Protect kvm->memslots with a mutex
On Wed, Apr 28, 2021 at 2:41 PM Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> On 28/04/21 22:40, Ben Gardon wrote:
> > ... However with the locking you propose below, we might still run
> > into issues on a move or delete, which would mean we'd still need the
> > separate memory allocation for the rmaps array. Or we do some
> > shenanigans where we try to copy the rmap pointers from the other set
> > of memslots.
>
> If that's (almost) as easy as passing old to
> kvm_arch_prepare_memory_region, that would be totally okay.
Unfortunately it's not quite that easy, because we'd need to copy the
rmaps for all the slots _besides_ the one being modified.
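Roughly, I'm picturing something like the sketch below, run when the new
set of memslots is installed. The helper name is made up and this is
untested; it just illustrates carrying the existing rmap arrays over for
every slot other than the changed one:

/*
 * Sketch only: when installing a new copy of the memslots, reuse the
 * already-allocated rmap arrays from the old copy for every slot except
 * the one being modified, which gets (or frees) its own arrays in
 * kvm_arch_prepare_memory_region().
 */
static void kvm_copy_memslot_rmaps(struct kvm_memslots *new,
				   struct kvm_memslots *old,
				   short changed_id)
{
	struct kvm_memory_slot *src, *dst;
	int i;

	for (i = 0; i < old->used_slots; i++) {
		src = &old->memslots[i];
		if (src->id == changed_id)
			continue;

		dst = id_to_memslot(new, src->id);
		if (!dst)
			continue;

		/* arch.rmap is just a small array of per-level pointers. */
		memcpy(dst->arch.rmap, src->arch.rmap,
		       sizeof(dst->arch.rmap));
	}
}

The rmap arrays themselves aren't copied, only the pointers, so both
copies of the memslots end up sharing them.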
>
> > My only worry is the latency this could add to a nested VM launch, but
> > it seems pretty unlikely that it would frequently coincide with a
> > memslot change in practice.
>
> Right, memslot changes in practice occur only at boot and on hotplug.
> If that was a problem we could always make the allocation state
> off/in-progress/on, so it can be checked outside the lock. This would
> only potentially slow down the first nested VM launch.
>
> Paolo
>
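For what it's worth, here's roughly how I'd picture that
off/in-progress/on state working. The field, the lock, and the
allocation helper are made up for illustration, not taken from this
series:

/*
 * Sketch of the off/in-progress/on idea: vCPUs read the state without
 * taking the memslot mutex, and only the first nested VM launch pays
 * for allocating the rmaps.
 */
enum kvm_rmaps_state {
	KVM_RMAPS_OFF,
	KVM_RMAPS_IN_PROGRESS,
	KVM_RMAPS_ON,
};

static bool kvm_memslots_have_rmaps(struct kvm *kvm)
{
	/* Pairs with the smp_store_release() in the allocation path. */
	return smp_load_acquire(&kvm->arch.rmaps_state) == KVM_RMAPS_ON;
}

static int kvm_activate_rmaps(struct kvm *kvm)
{
	int r = 0;

	/* Serializes allocators; lockless readers never take this. */
	mutex_lock(&kvm->arch.rmaps_lock);
	if (kvm->arch.rmaps_state == KVM_RMAPS_ON)
		goto out_unlock;

	WRITE_ONCE(kvm->arch.rmaps_state, KVM_RMAPS_IN_PROGRESS);
	r = kvm_alloc_all_memslot_rmaps(kvm);	/* hypothetical helper */
	if (r)
		WRITE_ONCE(kvm->arch.rmaps_state, KVM_RMAPS_OFF);
	else
		/* Publish the rmap pointers before advertising "on". */
		smp_store_release(&kvm->arch.rmaps_state, KVM_RMAPS_ON);
out_unlock:
	mutex_unlock(&kvm->arch.rmaps_lock);
	return r;
}

The barrier pairing is the important bit: a vCPU that observes "on"
without holding the lock must also observe the rmap arrays that were
installed before the state flipped.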