Message-ID: <493D300A.6080805@sgi.com>
Date: Mon, 08 Dec 2008 06:32:42 -0800
From: Mike Travis <travis@....com>
To: Avi Kivity <avi@...hat.com>
CC: Rusty Russell <rusty@...tcorp.com.au>,
kvm-devel <kvm@...r.kernel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] kvm: use modern cpumask primitives, no cpumask_t
 on stack

Avi Kivity wrote:
> Rusty Russell wrote:
>> We're getting rid of on-stack cpumasks for large NR_CPUS.
>>
>> 1) Use cpumask_var_t and alloc_cpumask_var (a no-op normally). The fallback
>> code is inefficient but never runs in practice.
>> 2) smp_call_function_mask -> smp_call_function_many
>> 3) cpus_clear, cpus_empty, cpu_set -> cpumask_clear, cpumask_empty,
>> cpumask_set_cpu.
>>
>> --- linux-2.6.orig/virt/kvm/kvm_main.c
>> +++ linux-2.6/virt/kvm/kvm_main.c
>> @@ -358,11 +358,23 @@ static void ack_flush(void *_completed)
>> void kvm_flush_remote_tlbs(struct kvm *kvm)
>> {
>> int i, cpu, me;
>> - cpumask_t cpus;
>> + cpumask_var_t cpus;
>> struct kvm_vcpu *vcpu;
>>
>> me = get_cpu();
>> - cpus_clear(cpus);
>> + if (!alloc_cpumask_var(&cpus, GFP_ATOMIC)) {
>> + /* Slow path on failure. Call everyone. */
>> + for (i = 0; i < KVM_MAX_VCPUS; ++i) {
>> + vcpu = kvm->vcpus[i];
>> + if (vcpu)
>> + set_bit(KVM_REQ_TLB_FLUSH, &vcpu->requests);
>> + }
>> + ++kvm->stat.remote_tlb_flush;
>> + smp_call_function_many(cpu_online_mask, ack_flush, NULL, 1);
>> + put_cpu();
>> + return;
>> + }
>> +
>>
>
> Wow, code duplication from Rusty. Things must be bad.
>
> Since we're in a get_cpu() here, how about a per_cpu static cpumask
> instead? I don't mind the inefficient fallback, just the duplication.
>
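
A per-cpu mask would look something like the sketch below (completely
untested; the name kvm_tlb_flush_mask is made up, and the loop is just the
existing body of kvm_flush_remote_tlbs()):

static DEFINE_PER_CPU(cpumask_t, kvm_tlb_flush_mask);

void kvm_flush_remote_tlbs(struct kvm *kvm)
{
	int i, cpu, me;
	cpumask_t *cpus;
	struct kvm_vcpu *vcpu;

	me = get_cpu();		/* preemption off, so this cpu's mask is ours */
	cpus = &__get_cpu_var(kvm_tlb_flush_mask);
	cpumask_clear(cpus);

	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
		vcpu = kvm->vcpus[i];
		if (!vcpu)
			continue;
		if (test_and_set_bit(KVM_REQ_TLB_FLUSH, &vcpu->requests))
			continue;
		cpu = vcpu->cpu;
		if (cpu != -1 && cpu != me)
			cpumask_set_cpu(cpu, cpus);
	}
	if (!cpumask_empty(cpus))
		smp_call_function_many(cpus, ack_flush, NULL, 1);
	++kvm->stat.remote_tlb_flush;
	put_cpu();
}

The cost is one static NR_CPUS-bit mask per cpu whether or not KVM is ever
used, but it avoids both the GFP_ATOMIC allocation and the duplicated slow
path.
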
One thing to note is that when CONFIG_CPUMASK_OFFSTACK=n, alloc_cpumask_var
returns a constant true, so the duplicated fallback code is not even compiled in.
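
For reference, the relevant part of include/linux/cpumask.h looks roughly
like this (paraphrased, not a verbatim copy):

#ifdef CONFIG_CPUMASK_OFFSTACK
typedef struct cpumask *cpumask_var_t;

bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags);
void free_cpumask_var(cpumask_var_t mask);
#else
typedef struct cpumask cpumask_var_t[1];

static inline bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
{
	return true;	/* the mask is just an array, nothing to allocate */
}

static inline void free_cpumask_var(cpumask_var_t mask)
{
}
#endif

So with CONFIG_CPUMASK_OFFSTACK=n the "if (!alloc_cpumask_var(...))" test is
a compile-time constant false and gcc drops the whole fallback block as dead
code.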