Message-ID: <3b7194b7-4616-0585-9a4a-91d42eebea9a@redhat.com>
Date: Mon, 5 Jun 2023 20:27:30 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Ben Gardon <bgardon@...gle.com>, Jim Mattson <jmattson@...gle.com>
Cc: Mingwei Zhang <mizhang@...gle.com>,
Sean Christopherson <seanjc@...gle.com>,
"H. Peter Anvin" <hpa@...or.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: x86/mmu: Remove KVM MMU write lock when accessing
indirect_shadow_pages
On 6/5/23 19:17, Ben Gardon wrote:
>>> - if (indirect_shadow_pages)
>>> + if (READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
>> I don't understand the need for READ_ONCE() here. That implies that
>> there is something tricky going on, and I don't think that's the case.
> Agree this doesn't need a READ_ONCE. Just a read is fine.
> kvm_mmu_unprotect_page starts by acquiring the MMU lock, so there's
> not much room to reorder anything anyway.
I agree that "there's not much room to reorder anything", but it's not a
particularly formal/reassuring statement :) and READ_ONCE/WRITE_ONCE
have documentation value too.
Or maybe it's just that I've become used to the C11 memory model. Either
way, I like the idea of using READ_ONCE/WRITE_ONCE explicitly whenever
there can be concurrent accesses (which would be data races in the C11
model), and I think Linux is moving in that direction too.
Paolo