Message-ID: <a5fb86aa-7930-c258-5650-a4eea9c2e917@redhat.com>
Date: Fri, 7 May 2021 10:28:11 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Ben Gardon <bgardon@...gle.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: Peter Xu <peterx@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Peter Shier <pshier@...gle.com>,
Yulei Zhang <yulei.kernel@...il.com>,
Wanpeng Li <kernellwp@...il.com>,
Xiao Guangrong <xiaoguangrong.eric@...il.com>,
Kai Huang <kai.huang@...el.com>,
Keqian Zhu <zhukeqian1@...wei.com>
Subject: Re: [PATCH v3 8/8] KVM: x86/mmu: Lazily allocate memslot rmaps
On 06/05/21 20:42, Ben Gardon wrote:
> + /*
> + * memslots_have_rmaps is set and read in different lock contexts,
> + * so protect it with smp_load/store.
> + */
> + smp_store_release(&kvm->arch.memslots_have_rmaps, true);
Shorter and better: /* write rmap pointers before memslots_have_rmaps */
> + mutex_unlock(&kvm->slots_arch_lock);
> + return 0;
> +}
> +
> bool kvm_memslots_have_rmaps(struct kvm *kvm)
> {
> - return kvm->arch.memslots_have_rmaps;
> + /*
> + * memslots_have_rmaps is set and read in different lock contexts,
> + * so protect it with smp_load/store.
> + */
> + return smp_load_acquire(&kvm->arch.memslots_have_rmaps);
> }
>
Likewise, /* read memslots_have_rmaps before the rmaps themselves */
Paolo