Message-ID: <4870FF0E.80200@goop.org>
Date: Sun, 06 Jul 2008 10:21:18 -0700
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: Jens Axboe <jens.axboe@...cle.com>
CC: linux-kernel@...r.kernel.org, peterz@...radead.org,
npiggin@...e.de, linux-arch@...r.kernel.org, mingo@...e.hu,
paulmck@...ux.vnet.ibm.com
Subject: Re: [PATCH 1/10] Add generic helpers for arch IPI function calls
Jens Axboe wrote:
> +/**
> + * __smp_call_function_single(): Run a function on another CPU
> + * @cpu: The CPU to run on.
> + * @data: Pre-allocated and setup data structure
> + *
> + * Like smp_call_function_single(), but allow caller to pass in a pre-allocated
> + * data structure. Useful for embedding @data inside other structures, for
> + * instance.
> + *
> + */
> +void __smp_call_function_single(int cpu, struct call_single_data *data)
>
Does this have any users? Proposed users?
> +{
> +	/* Can deadlock when called with interrupts disabled */
> +	WARN_ON((data->flags & CSD_FLAG_WAIT) && irqs_disabled());
>
Can it reasonably be used without FLAG_WAIT set? If it doesn't have
FLAG_WAIT, it must have FLAG_ALLOC set, but it can't have FLAG_ALLOC if
the csd structure is embedded in something else.
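To make that concrete, the only kind of user I can picture for it is
something like the below, where the csd lives inside the caller's own
structure (names entirely made up, of course), and then CSD_FLAG_WAIT
looks like the only flag that makes sense:

	struct my_request {
		struct call_single_data csd;	/* embedded, so CSD_FLAG_ALLOC is out */
		void *payload;
	};

	static void my_request_handler(void *info)
	{
		struct my_request *req = info;

		/* ... act on req on the target cpu ... */
	}

	static void my_send_request(int cpu, struct my_request *req)
	{
		req->csd.flags = CSD_FLAG_WAIT;
		req->csd.func = my_request_handler;
		req->csd.info = req;
		__smp_call_function_single(cpu, &req->csd);
	}
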
> +
> +	generic_exec_single(cpu, data);
> +}
> +
> +/**
> + * smp_call_function_mask(): Run a function on a set of other CPUs.
> + * @mask: The set of cpus to run on.
> + * @func: The function to run. This must be fast and non-blocking.
> + * @info: An arbitrary pointer to pass to the function.
> + * @wait: If true, wait (atomically) until function has completed on other CPUs.
> + *
> + * Returns 0 on success, else a negative status code.
> + *
> + * If @wait is true, then returns once @func has returned.
> + *
> + * You must not call this function with disabled interrupts or from a
> + * hardware interrupt handler or from a bottom half handler.
> + */
> +int smp_call_function_mask(cpumask_t mask, void (*func)(void *), void *info,
> +			   int wait)
> +{
> +	struct call_function_data d;
> +	struct call_function_data *data = NULL;
> +	cpumask_t allbutself;
> +	unsigned long flags;
> +	int cpu, num_cpus;
> +
> +	/* Can deadlock when called with interrupts disabled */
> +	WARN_ON(irqs_disabled());
> +
> +	cpu = smp_processor_id();
>
What prevents us from changing cpus? Are all callers supposed to
disable preemption? If so, should we WARN for it?
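If the answer is "callers must disable preemption", then making that
explicit here would be nice, e.g. (just a sketch, and preemptible() is
a no-op without CONFIG_PREEMPT so it's not a complete check):

	/* callers must have preemption disabled, so we can't migrate */
	WARN_ON_ONCE(preemptible());
	cpu = smp_processor_id();

or alternatively use get_cpu()/put_cpu() in here and not rely on the
caller at all.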
> +	allbutself = cpu_online_map;
> +	cpu_clear(cpu, allbutself);
> +	cpus_and(mask, mask, allbutself);
> +	num_cpus = cpus_weight(mask);
>
Ditto: What prevents the online map changing under our feet? Is
disabling preemption enough?
> +
> +	/*
> +	 * If zero CPUs, return. If just a single CPU, turn this request
> +	 * into a targetted single call instead since it's faster.
> +	 */
> +	if (!num_cpus)
> +		return 0;
> +	else if (num_cpus == 1) {
> +		cpu = first_cpu(mask);
> +		return smp_call_function_single(cpu, func, info, 0, wait);
> +	}
>
It's weirdly inconsistent that smp_call_function_mask will ignore the
current cpu if it's in the mask, but smp_call_function_single will call
the function locally if the specified cpu is the current one. I know
this is a hold-over from the old code, but could we finally fix it now?
I guess it's still reasonable for smp_call_function to call everywhere
but the current cpu, but it can do its own mask manipulation to sort
that out.
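In other words, something along these lines (untested sketch):

	int smp_call_function(void (*func)(void *), void *info, int natomic,
			      int wait)
	{
		cpumask_t mask;
		int ret;

		preempt_disable();
		mask = cpu_online_map;
		cpu_clear(smp_processor_id(), mask);
		ret = smp_call_function_mask(mask, func, info, wait);
		preempt_enable();
		return ret;
	}

and then smp_call_function_mask() could treat a mask containing the
current cpu the same way smp_call_function_single() does, i.e. run the
function locally as well.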
> +
> +	if (!wait) {
> +		data = kmalloc(sizeof(*data), GFP_ATOMIC);
> +		if (data)
> +			data->csd.flags = CSD_FLAG_ALLOC;
> +	}
> +	if (!data) {
> +		data = &d;
> +		data->csd.flags = CSD_FLAG_WAIT;
> +	}
> +
> +	spin_lock_init(&data->lock);
> +	data->csd.func = func;
> +	data->csd.info = info;
> +	data->refs = num_cpus;
> +
> +	/*
> +	 * need to see above stores before the cpumask is valid for the CPU
> +	 */
> +	smp_wmb();
> +	data->cpumask = mask;
> +
> +	spin_lock_irqsave(&call_function_lock, flags);
> +	list_add_tail_rcu(&data->csd.list, &call_function_queue);
> +	spin_unlock_irqrestore(&call_function_lock, flags);
> +
> +	/* Send a message to all CPUs in the map */
> +	arch_send_call_function_ipi(mask);
> +
> +	/* optionally wait for the CPUs to complete */
> +	if (wait)
>
Should this test for data->csd.flags & CSD_FLAG_WAIT? Or should the "if
(!data)" path above also set "wait"?
> +		csd_flag_wait(&data->csd);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(smp_call_function_mask);
> +
> +/**
> + * smp_call_function(): Run a function on all other CPUs.
> + * @func: The function to run. This must be fast and non-blocking.
> + * @info: An arbitrary pointer to pass to the function.
> + * @natomic: Unused
> + * @wait: If true, wait (atomically) until function has completed on other CPUs.
> + *
> + * Returns 0 on success, else a negative status code.
> + *
> + * If @wait is true, then returns once @func has returned; otherwise
> + * it returns just before the target cpu calls @func.
> + *
> + * You must not call this function with disabled interrupts or from a
> + * hardware interrupt handler or from a bottom half handler.
> + */
> +int smp_call_function(void (*func)(void *), void *info, int natomic, int wait)
> +{
> +	int ret;
> +
> +	preempt_disable();
> +	ret = smp_call_function_mask(cpu_online_map, func, info, wait);
> +	preempt_enable();
> +	return ret;
> +}
> +EXPORT_SYMBOL(smp_call_function);
>