Message-ID: <YTJL1yOC2iwHwd9B@google.com>
Date: Fri, 3 Sep 2021 16:22:47 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>
Cc: kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
"Dr. David Alan Gilbert" <dgilbert@...hat.com>,
Nitesh Narayan Lal <nitesh@...hat.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Maxim Levitsky <mlevitsk@...hat.com>,
Eduardo Habkost <ehabkost@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 4/8] KVM: Optimize kvm_make_vcpus_request_mask() a bit
On Fri, Sep 03, 2021, Vitaly Kuznetsov wrote:
> Iterating over set bits in 'vcpu_bitmap' should be faster than going
> through all vCPUs, especially when just a few bits are set.
>
> Drop the kvm_make_vcpus_request_mask() call from kvm_make_all_cpus_request_except()
> to avoid handling the special case when 'vcpu_bitmap' is NULL; move that
> code into kvm_make_all_cpus_request_except() itself.
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
Reviewed-by: Sean Christopherson <seanjc@...gle.com>
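
For context, the win is replacing a full kvm_for_each_vcpu() walk (testing
each vCPU's bit in the mask) with iteration over only the set bits.  A
minimal sketch of the two loop shapes, with the per-vCPU request/cpumask
work elided since it isn't visible in the hunks below:

    /* Before: visit every vCPU, skip the ones not in the mask. */
    kvm_for_each_vcpu(i, vcpu, kvm) {
        if ((vcpu_bitmap && !test_bit(i, vcpu_bitmap)) || vcpu == except)
            continue;
        /* ... mark the request, accumulate CPUs to kick in 'tmp' ... */
    }

    /* After: visit only the vCPUs whose bits are set in the mask. */
    for_each_set_bit(i, vcpu_bitmap, KVM_MAX_VCPUS) {
        vcpu = kvm_get_vcpu(kvm, i);
        if (!vcpu || vcpu == except)
            continue;
        /* ... mark the request, accumulate CPUs to kick in 'tmp' ... */
    }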
> bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
> struct kvm_vcpu *except,
> unsigned long *vcpu_bitmap, cpumask_var_t tmp)
> {
> - int i, cpu, me;
> struct kvm_vcpu *vcpu;
> + int i, me;
> bool called;
Uber nit: if you're moving "int i, me;" to get reverse fir tree ordering, it
should be moved below "bool called;" as well, which you amusingly did do in the
function below :-)
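
For anyone unfamiliar with the convention, "reverse fir tree" (a.k.a.
reverse Christmas tree) ordering simply means sorting local declarations
from the longest line to the shortest, e.g.:

    struct kvm_vcpu *vcpu;  /* longest declaration first */
    cpumask_var_t cpus;
    bool called;
    int i, me;              /* shortest declaration last */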
>
> me = get_cpu();
>
...
> @@ -316,12 +323,23 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
> bool kvm_make_all_cpus_request_except(struct kvm *kvm, unsigned int req,
> struct kvm_vcpu *except)
> {
> + struct kvm_vcpu *vcpu;
> cpumask_var_t cpus;
> bool called;
> + int i, me;
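
FWIW, with the NULL-bitmap special case gone from the mask variant, the
expectation is that kvm_make_all_cpus_request_except() now open-codes the
vCPU walk itself rather than calling kvm_make_vcpus_request_mask() with a
NULL bitmap.  A rough sketch of that shape, assuming the per-vCPU
request/cpumask accumulation is shared with the mask variant (that helper
isn't shown in the hunks above) and using the existing kick/cpumask calls
from kvm_main.c:

    zalloc_cpumask_var(&cpus, GFP_ATOMIC);

    me = get_cpu();

    kvm_for_each_vcpu(i, vcpu, kvm) {
        if (vcpu == except)
            continue;
        /* ... mark the request on vcpu, accumulate its CPU in 'cpus' ... */
    }

    called = kvm_kick_many_cpus(cpus, !!(req & KVM_REQUEST_WAIT));
    put_cpu();

    free_cpumask_var(cpus);
    return called;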