Message-ID: <tip-44825757a3ee37c030165a94d6b4dd79c564f661@git.kernel.org>
Date: Wed, 5 Aug 2015 15:20:30 -0700
From: tip-bot for Thomas Gleixner <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: jiang.liu@...ux.intel.com, peterz@...radead.org,
rusty@...tcorp.com.au, hpa@...or.com, mingo@...nel.org,
bhelgaas@...gle.com, tglx@...utronix.de,
linux-kernel@...r.kernel.org
Subject: [tip:x86/apic] x86/irq: Get rid of an indentation level

Commit-ID:  44825757a3ee37c030165a94d6b4dd79c564f661
Gitweb:     http://git.kernel.org/tip/44825757a3ee37c030165a94d6b4dd79c564f661
Author:     Thomas Gleixner <tglx@...utronix.de>
AuthorDate: Sun, 2 Aug 2015 20:38:25 +0000
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Thu, 6 Aug 2015 00:14:59 +0200

x86/irq: Get rid of an indentation level

Make the code simpler to read.
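
A standalone illustration of the pattern this patch applies (a made-up
example, not kernel code): invert the condition and continue early, so
the actual work loses one indentation level.

	#include <stdio.h>

	int main(void)
	{
		int table[] = { -1, 3, -1, 7 };	/* negative entries are unused */
		int i, count = 0;

		for (i = 0; i < 4; i++) {
			/* Before: if (table[i] >= 0) { count++; } nested the work. */
			if (table[i] < 0)
				continue;	/* After: early guard, one level less. */
			count++;
		}
		printf("%d used entries\n", count);
		return 0;
	}

The guard-clause form keeps the bail-out next to its check and spares the
reader a level of nesting when following the common path.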
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Jiang Liu <jiang.liu@...ux.intel.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Rusty Russell <rusty@...tcorp.com.au>
Cc: Bjorn Helgaas <bhelgaas@...gle.com>
Link: http://lkml.kernel.org/r/20150802203609.555253675@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
---
 arch/x86/kernel/irq.c | 72 +++++++++++++++++++++++++--------------------------
 1 file changed, 35 insertions(+), 37 deletions(-)

diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 931bdd2..140950f 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -342,47 +342,45 @@ int check_irq_vectors_for_cpu_disable(void)
 	this_count = 0;
 	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
 		irq = __this_cpu_read(vector_irq[vector]);
-		if (irq >= 0) {
-			desc = irq_to_desc(irq);
-			if (!desc)
-				continue;
+		if (irq < 0)
+			continue;
+		desc = irq_to_desc(irq);
+		if (!desc)
+			continue;
 
-			/*
-			 * Protect against concurrent action removal,
-			 * affinity changes etc.
-			 */
-			raw_spin_lock(&desc->lock);
-			data = irq_desc_get_irq_data(desc);
-			cpumask_copy(&affinity_new,
-				     irq_data_get_affinity_mask(data));
-			cpumask_clear_cpu(this_cpu, &affinity_new);
-
-			/* Do not count inactive or per-cpu irqs. */
-			if (!irq_has_action(irq) || irqd_is_per_cpu(data)) {
-				raw_spin_unlock(&desc->lock);
-				continue;
-			}
+		/*
+		 * Protect against concurrent action removal, affinity
+		 * changes etc.
+		 */
+		raw_spin_lock(&desc->lock);
+		data = irq_desc_get_irq_data(desc);
+		cpumask_copy(&affinity_new, irq_data_get_affinity_mask(data));
+		cpumask_clear_cpu(this_cpu, &affinity_new);
+		/* Do not count inactive or per-cpu irqs. */
+		if (!irq_has_action(irq) || irqd_is_per_cpu(data)) {
 			raw_spin_unlock(&desc->lock);
-			/*
-			 * A single irq may be mapped to multiple
-			 * cpu's vector_irq[] (for example IOAPIC cluster
-			 * mode). In this case we have two
-			 * possibilities:
-			 *
-			 * 1) the resulting affinity mask is empty; that is
-			 * this the down'd cpu is the last cpu in the irq's
-			 * affinity mask, or
-			 *
-			 * 2) the resulting affinity mask is no longer
-			 * a subset of the online cpus but the affinity
-			 * mask is not zero; that is the down'd cpu is the
-			 * last online cpu in a user set affinity mask.
-			 */
-			if (cpumask_empty(&affinity_new) ||
-			    !cpumask_subset(&affinity_new, &online_new))
-				this_count++;
+			continue;
 		}
+
+		raw_spin_unlock(&desc->lock);
+		/*
+		 * A single irq may be mapped to multiple cpu's
+		 * vector_irq[] (for example IOAPIC cluster mode). In
+		 * this case we have two possibilities:
+		 *
+		 * 1) the resulting affinity mask is empty; that is,
+		 * the down'd cpu is the last cpu in the irq's
+		 * affinity mask, or
+		 *
+		 * 2) the resulting affinity mask is no longer a
+		 * subset of the online cpus but the affinity mask is
+		 * not zero; that is, the down'd cpu is the last online
+		 * cpu in a user-set affinity mask.
+		 */
+		if (cpumask_empty(&affinity_new) ||
+		    !cpumask_subset(&affinity_new, &online_new))
+			this_count++;
 	}
 
 	count = 0;
 	for_each_online_cpu(cpu) {
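
For reference, a hedged userspace sketch of the two conditions the
comment above describes, with plain bitmasks standing in for struct
cpumask and all CPU numbers made up:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* CPU 0 is going down; CPUs 1 and 2 remain online. */
		uint64_t online_new = (1ULL << 1) | (1ULL << 2);

		/* irq A was affine to CPU 0 only. */
		uint64_t affinity_a = 1ULL << 0;
		uint64_t affinity_a_new = affinity_a & ~(1ULL << 0);

		/* Case 1: mask became empty, the down'd CPU was its last member. */
		if (affinity_a_new == 0)
			printf("irq A: affinity mask empty, must be migrated\n");

		/* irq B was affine to CPUs 0 and 3, but CPU 3 is already offline. */
		uint64_t affinity_b = (1ULL << 0) | (1ULL << 3);
		uint64_t affinity_b_new = affinity_b & ~(1ULL << 0);

		/* Case 2: mask is non-empty but no longer a subset of the online
		 * CPUs, so the down'd CPU was its last *online* member. */
		if (affinity_b_new != 0 && (affinity_b_new & ~online_new) != 0)
			printf("irq B: no online CPU left in mask, must be migrated\n");

		return 0;
	}

Both cases leave the irq without a usable online CPU once this_cpu goes
down, which is why the function counts them against the vector space
still free on the remaining CPUs.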
--