Message-Id: <20230928100638.42116-1-gongwei833x@gmail.com>
Date: Thu, 28 Sep 2023 18:06:38 +0800
From: Wei Gong <gongwei833x@gmail.com>
To: tglx@linutronix.de
Cc: linux-kernel@vger.kernel.org, Wei Gong <gongwei833x@gmail.com>
Subject: [PATCH v3] genirq: avoid long loops in handle_edge_irq

When a large number of interrupts occur on a network card's tx queue
whose IRQ is pinned to CPU 0 (smp_affinity=1), changing the CPU
affinity of that queue (echo 2 > /proc/irq/xx/smp_affinity) can cause
handle_edge_irq() to spin in its do {} while() loop for a long time.
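
For reference, the loop in question is the tail of handle_edge_irq(),
abridged here from kernel/irq/chip.c:

	do {
		...
		/*
		 * When another irq arrived while we were handling
		 * one, we could have masked the irq.
		 * Reenable it, if it was not disabled in meantime.
		 */
		if (unlikely(desc->istate & IRQS_PENDING)) {
			if (!irqd_irq_disabled(&desc->irq_data) &&
			    irqd_irq_masked(&desc->irq_data))
				unmask_irq(desc);
		}

		handle_irq_event(desc);

	} while ((desc->istate & IRQS_PENDING) &&
		 !irqd_irq_disabled(&desc->irq_data));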

After the CPU affinity of an IRQ is changed, the new affinity only
takes effect when the next interrupt arrives, so that interrupt is
still delivered to and handled on CPU 0. Once the new affinity has
been applied there, all subsequent interrupts are delivered to CPU 1,
where they only mark the interrupt pending again because CPU 0 still
has it flagged as in progress, which keeps CPU 0 looping.
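
On x86, the deferred affinity change is applied from the ack path of
that next interrupt, roughly along the call chain below (simplified
sketch, not verbatim kernel code; details vary by kernel version):

	/*
	 * handle_edge_irq()                    // still running on CPU 0
	 *   desc->irq_data.chip->irq_ack()     // apic_ack_irq() on x86
	 *     irq_move_irq()                   // IRQD_SETAFFINITY_PENDING set
	 *       irq_do_set_affinity()          // retarget to the new mask (CPU 1)
	 *   handle_irq_event()                 // handlers keep running on CPU 0,
	 *                                      // further edges now hit CPU 1
	 */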

    cpu 0                                   cpu 1

  - handle_edge_irq
    - apic_ack_irq
      - irq_do_set_affinity
                                          - handle_edge_irq
    - do {
        - handle_irq_event
          - istate &= ~IRQS_PENDING
          - IRQD_IRQ_INPROGRESS
          - spin_unlock()
                                            - spin_lock()
                                            - istate |= IRQS_PENDING
          - handle_irq_event_percpu         - mask_ack_irq()
                                            - spin_unlock()
          - spin_lock()
      } while (IRQS_PENDING &&
               !irqd_irq_disabled)
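
The two columns above correspond, roughly, to the following abridged
code: on CPU 0, handle_irq_event() clears IRQS_PENDING, sets
IRQD_IRQ_INPROGRESS and drops desc->lock while the handlers run; on
CPU 1, handle_edge_irq() takes the early-exit path because the
interrupt is already in progress, so it only marks it pending and
masks+acks it.

	/* kernel/irq/handle.c (abridged) -- the loop body on CPU 0 */
	irqreturn_t handle_irq_event(struct irq_desc *desc)
	{
		irqreturn_t ret;

		desc->istate &= ~IRQS_PENDING;
		irqd_set(&desc->irq_data, IRQD_IRQ_INPROGRESS);
		raw_spin_unlock(&desc->lock);

		ret = handle_irq_event_percpu(desc);	/* handlers run unlocked */

		raw_spin_lock(&desc->lock);
		irqd_clear(&desc->irq_data, IRQD_IRQ_INPROGRESS);
		return ret;
	}

	/* kernel/irq/chip.c (abridged) -- what CPU 1 does meanwhile */
	void handle_edge_irq(struct irq_desc *desc)
	{
		raw_spin_lock(&desc->lock);
		...
		if (!irq_may_run(desc)) {	/* IRQD_IRQ_INPROGRESS set by CPU 0 */
			desc->istate |= IRQS_PENDING;
			mask_ack_irq(desc);
			goto out_unlock;
		}
		...
	out_unlock:
		raw_spin_unlock(&desc->lock);
	}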

Therefore, when deciding whether to continue the do {} while() loop,
additionally check whether the current CPU is still part of the
interrupt's effective affinity mask. If it is not, stop looping so the
pending interrupt can be handled on its new target CPU.

Signed-off-by: Wei Gong <gongwei833x@gmail.com>
---
kernel/irq/chip.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index dc94e0bf2c94..a457490bd965 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -831,7 +831,9 @@ void handle_edge_irq(struct irq_desc *desc)
 		handle_irq_event(desc);
 
 	} while ((desc->istate & IRQS_PENDING) &&
-		 !irqd_irq_disabled(&desc->irq_data));
+		 !irqd_irq_disabled(&desc->irq_data) &&
+		 cpumask_test_cpu(smp_processor_id(),
+				  irq_data_get_effective_affinity_mask(&desc->irq_data)));
 
 out_unlock:
 	raw_spin_unlock(&desc->lock);
--
2.32.1 (Apple Git-133)