Message-ID: <Y0BcGnabCp9ukxDs@google.com>
Date: Fri, 7 Oct 2022 17:04:26 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Hao Peng <flyingpenghao@...il.com>
Cc: pbonzini@...hat.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH ] kvm: x86: Reduce unnecessary function call
On Fri, Oct 07, 2022, Hao Peng wrote:
> From: Peng Hao <flyingpeng@...cent.com>
>
> mutex_lock(&kvm->lock) is taken immediately before the call to
> rcu_replace_pointer(), so the mutex_is_locked(&kvm->lock) check is
> always true and the call is unnecessary.
>
> Signed-off-by: Peng Hao <flyingpeng@...cent.com>
> ---
> arch/x86/kvm/pmu.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 02f9e4f245bd..8a7dbe2c469a 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -601,8 +601,7 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
> sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL);
>
> mutex_lock(&kvm->lock);
> - filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
> - mutex_is_locked(&kvm->lock));
> + filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter, 1);
I'd prefer to keep the mutex_is_locked() call, even though it's quite silly, as
it self-documents what is being used to protect writes to pmu_event_filter.
The third parameter is evaluated iff CONFIG_PROVE_RCU=y, which is the complete
opposite of performance sensitive, so in practice there's no real downside to
the somewhat superfluous call.
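
For reference, rcu_replace_pointer() looks roughly like the sketch below (a
simplified rendition of include/linux/rcupdate.h; exact details may differ by
kernel version).  The condition 'c' only feeds the lockdep assertion inside
rcu_dereference_protected(), which compiles away unless CONFIG_PROVE_RCU=y:

  /*
   * Simplified sketch of rcu_replace_pointer(), based on
   * include/linux/rcupdate.h (may vary by kernel version).  The 'c'
   * argument is only consumed by rcu_dereference_protected()'s
   * RCU_LOCKDEP_WARN() check under CONFIG_PROVE_RCU=y, so passing
   * mutex_is_locked(&kvm->lock) costs nothing in production builds while
   * documenting which lock protects the pointer.
   */
  #define rcu_replace_pointer(rcu_ptr, ptr, c)				\
  ({									\
  	typeof(ptr) __tmp = rcu_dereference_protected((rcu_ptr), (c));	\
  	rcu_assign_pointer((rcu_ptr), (ptr));				\
  	__tmp;								\
  })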
> mutex_unlock(&kvm->lock);
>
> synchronize_srcu_expedited(&kvm->srcu);
> --
> 2.27.0