Message-ID: <CAH3Oq6SHLWvcdVUFsmPvqNedBC_QQhVSxrvCiyAUL3TgPQn3wQ@mail.gmail.com>
Date:	Thu, 31 May 2012 19:08:17 +0800
From:	Ning Jiang <ning.n.jiang@...il.com>
To:	Thomas Gleixner <tglx@...utronix.de>
Cc:	linux-kernel@...r.kernel.org
Subject: Make kstat_incr_irqs_this_cpu() called when an irq is actually being handled

The current kstat_incr_irqs_this_cpu() is really confusing to me, so I made a
patch to count irq numbers more reliably according to my understanding.
Please share your comments about it.


From cf12b680e1363c5c8f99ab6141d832724c8221a6 Mon Sep 17 00:00:00 2001
From: Ning Jiang <ning.n.jiang@...il.com>
Date: Thu, 31 May 2012 17:41:53 +0800
Subject: [PATCH] genirq: Increase irq count when it is actually being handled

kstat_incr_irqs_this_cpu() is supposed to record the number of irqs
actually taken by a certain cpu. But it fails to do so in its
current form.

For a level irq, if it is disabled or has no action available, we skip
irq handling. When it is enabled later, the irq will be retriggered
and kstat_incr_irqs_this_cpu() will be called twice for this
single irq.

For an edge irq we do not have the same problem, because
kstat_incr_irqs_this_cpu() is placed after the check for
disabled or no action. But it has a problem of its own: suppose a
second irq arrives while the first is being handled; it will mark
itself pending and quit, and then be handled in the same loop after
the first one. Thus kstat_incr_irqs_this_cpu() is called
only once for two irqs.

Placing kstat_incr_irqs_this_cpu() in handle_irq_event_percpu() gives
us the exact number of irqs handled by a certain cpu.

Adjust the kstat_incr_irqs_this_cpu() location in handle_nested_irq()
too, because nested irqs do not go through handle_irq_event_percpu()
for their interrupt processing.

Signed-off-by: Ning Jiang <ning.n.jiang@...il.com>
---
 kernel/irq/chip.c   |   15 ++++-----------
 kernel/irq/handle.c |    2 ++
 2 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index eebd6d5..37f18bd 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -272,8 +272,6 @@ void handle_nested_irq(unsigned int irq)

 	raw_spin_lock_irq(&desc->lock);

-	kstat_incr_irqs_this_cpu(irq, desc);
-
 	action = desc->action;
 	if (unlikely(!action || irqd_irq_disabled(&desc->irq_data))) {
 		desc->istate |= IRQS_PENDING;
@@ -283,6 +281,8 @@ void handle_nested_irq(unsigned int irq)
 	irqd_set(&desc->irq_data, IRQD_IRQ_INPROGRESS);
 	raw_spin_unlock_irq(&desc->lock);

+	kstat_incr_irqs_this_cpu(irq, desc);
+
 	action_ret = action->thread_fn(action->irq, action->dev_id);
 	if (!noirqdebug)
 		note_interrupt(irq, desc, action_ret);
@@ -324,7 +324,6 @@ handle_simple_irq(unsigned int irq, struct irq_desc *desc)
 			goto out_unlock;

 	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
-	kstat_incr_irqs_this_cpu(irq, desc);

 	if (unlikely(!desc->action || irqd_irq_disabled(&desc->irq_data))) {
 		desc->istate |= IRQS_PENDING;
@@ -377,7 +376,6 @@ handle_level_irq(unsigned int irq, struct irq_desc *desc)
 			goto out_unlock;

 	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
-	kstat_incr_irqs_this_cpu(irq, desc);

 	/*
 	 * If its disabled or no action available
@@ -427,7 +425,6 @@ handle_fasteoi_irq(unsigned int irq, struct irq_desc *desc)
 			goto out;

 	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
-	kstat_incr_irqs_this_cpu(irq, desc);

 	/*
 	 * If its disabled or no action available
@@ -494,7 +491,6 @@ handle_edge_irq(unsigned int irq, struct irq_desc *desc)
 			goto out_unlock;
 		}
 	}
-	kstat_incr_irqs_this_cpu(irq, desc);

 	/* Start handling the irq */
 	desc->irq_data.chip->irq_ack(&desc->irq_data);
@@ -554,7 +550,6 @@ void handle_edge_eoi_irq(unsigned int irq, struct irq_desc *desc)
 			goto out_eoi;
 		}
 	}
-	kstat_incr_irqs_this_cpu(irq, desc);

 	do {
 		if (unlikely(!desc->action))
@@ -583,8 +578,6 @@ handle_percpu_irq(unsigned int irq, struct irq_desc *desc)
 {
 	struct irq_chip *chip = irq_desc_get_chip(desc);

-	kstat_incr_irqs_this_cpu(irq, desc);
-
 	if (chip->irq_ack)
 		chip->irq_ack(&desc->irq_data);

@@ -613,11 +606,11 @@ void handle_percpu_devid_irq(unsigned int irq, struct irq_desc *desc)
 	void *dev_id = __this_cpu_ptr(action->percpu_dev_id);
 	irqreturn_t res;

-	kstat_incr_irqs_this_cpu(irq, desc);
-
 	if (chip->irq_ack)
 		chip->irq_ack(&desc->irq_data);

+	kstat_incr_irqs_this_cpu(irq, desc);
+
 	trace_irq_handler_entry(irq, action);
 	res = action->handler(irq, dev_id);
 	trace_irq_handler_exit(irq, action, res);
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index bdb1803..0f21460 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -135,6 +135,8 @@ handle_irq_event_percpu(struct irq_desc *desc, struct irqaction *action)
 	irqreturn_t retval = IRQ_NONE;
 	unsigned int random = 0, irq = desc->irq_data.irq;

+	kstat_incr_irqs_this_cpu(irq, desc);
+
 	do {
 		irqreturn_t res;

-- 
1.7.1

