Message-ID: <52CC4B77.9090401@redhat.com>
Date: Tue, 07 Jan 2014 13:46:15 -0500
From: Prarit Bhargava <prarit@...hat.com>
To: "Luck, Tony" <tony.luck@...el.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andi Kleen <ak@...ux.intel.com>,
Michel Lespinasse <walken@...gle.com>,
Seiji Aguchi <seiji.aguchi@....com>,
"Zhang, Yang Z" <yang.z.zhang@...el.com>,
"Gortmaker, Paul (Wind River)" <paul.gortmaker@...driver.com>,
"Morgan, Janet" <janet.morgan@...el.com>,
Ruiv Wang <ruiv.wang@...il.com>,
"H. Peter Anvin" <hpa@...ux.intel.com>,
"x86@...nel.org" <x86@...nel.org>,
chen gong <gong.chen@...ux.intel.com>
Subject: Re: [PATCH] x86: Add check for number of available vectors before
CPU down [v6]
On 01/07/2014 12:54 PM, Luck, Tony wrote:
> +	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
> +		irq = __this_cpu_read(vector_irq[vector]);
> +		if (irq >= 0) {
> +			desc = irq_to_desc(irq);
> +			data = irq_desc_get_irq_data(desc);
> +			cpumask_copy(&affinity_new, data->affinity);
> +			cpu_clear(this_cpu, affinity_new);
> +			/*
> +			 * The check below determines if this irq requires
> +			 * an empty vector_irq[irq] entry on an online
> +			 * cpu.
> +			 *
> +			 * The code only counts active non-percpu irqs, and
> +			 * those irqs that are not linked to an online cpu.
> +			 * The first test is trivial, the second is not.
> +			 *
> +			 * The second test takes into account that a single
> +			 * irq may be mapped to multiple cpu's vector_irq[]
> +			 * (for example IOAPIC cluster mode). In this case
> +			 * we have two possibilities:
> +			 *
> +			 * 1) the resulting affinity mask is empty; that is,
> +			 * the down'd cpu is the last cpu in the irq's
> +			 * affinity mask, and

Code says "||" - so I think comment should say "or".

> +			 *
> +			 * 2) the resulting affinity mask is no longer
> +			 * a subset of the online cpus but the affinity
> +			 * mask is not zero; that is, the down'd cpu is the
> +			 * last online cpu in a user set affinity mask.
> +			 *
> +			 * In both possibilities, we need to remap the irq
> +			 * to a new vector_irq[].
> +			 */
> +			if (irq_has_action(irq) && !irqd_is_per_cpu(data) &&
> +			    (cpumask_empty(&affinity_new) ||
> +			     !cpumask_subset(&affinity_new, &online_new)))
> +				this_count++;
> +		}
> 
> That's an impressive 6:1 ratio of lines-of-comment to lines-of-code!
Heh -- I couldn't decide if I should keep it all together in one comment or
divide it up. I guess it does look less scary if I divide it up. So how about
this (sorry for the cut-and-paste):
	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
		irq = __this_cpu_read(vector_irq[vector]);
		if (irq >= 0) {
			desc = irq_to_desc(irq);
			data = irq_desc_get_irq_data(desc);
			cpumask_copy(&affinity_new, data->affinity);
			cpu_clear(this_cpu, affinity_new);

			/* Do not count inactive or per-cpu irqs. */
			if (!irq_has_action(irq) || irqd_is_per_cpu(data))
				continue;

			/*
			 * A single irq may be mapped to multiple cpu's
			 * vector_irq[] (for example IOAPIC cluster mode).
			 * In this case we have two possibilities:
			 *
			 * 1) the resulting affinity mask is empty; that is,
			 * the down'd cpu is the last cpu in the irq's
			 * affinity mask, or
			 *
			 * 2) the resulting affinity mask is no longer a
			 * subset of the online cpus but the affinity mask
			 * is not zero; that is, the down'd cpu is the last
			 * online cpu in a user set affinity mask.
			 */
			if (cpumask_empty(&affinity_new) ||
			    !cpumask_subset(&affinity_new, &online_new))
				this_count++;
		}
	}
Everyone okay with that?
P.
--