Message-ID: <tip-e43b3b58548051f8809391eb7bec7a27ed3003ea@git.kernel.org>
Date:   Mon, 9 Oct 2017 04:35:38 -0700
From:   tip-bot for Thomas Gleixner <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     marc.zyngier@....com, linux-kernel@...r.kernel.org, hare@...e.de,
        hpa@...or.com, shivasharan.srikanteshwara@...adcom.com, hch@....de,
        yasu.isimatu@...il.com, mingo@...nel.org, tglx@...utronix.de,
        sumit.saxena@...adcom.com, kashyap.desai@...adcom.com
Subject: [tip:irq/urgent] genirq/cpuhotplug: Enforce affinity setting on
 startup of managed irqs

Commit-ID:  e43b3b58548051f8809391eb7bec7a27ed3003ea
Gitweb:     https://git.kernel.org/tip/e43b3b58548051f8809391eb7bec7a27ed3003ea
Author:     Thomas Gleixner <tglx@...utronix.de>
AuthorDate: Wed, 4 Oct 2017 21:07:38 +0200
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Mon, 9 Oct 2017 13:26:48 +0200

genirq/cpuhotplug: Enforce affinity setting on startup of managed irqs

Managed interrupts can end up in a stale state on CPU hotplug. If the
interrupt is not targeting a single CPU, i.e. the affinity mask spans
multiple CPUs, then the following can happen:

After boot:

dstate:   0x01601200
            IRQD_ACTIVATED
            IRQD_IRQ_STARTED
            IRQD_SINGLE_TARGET
            IRQD_AFFINITY_SET
            IRQD_AFFINITY_MANAGED
node:     0
affinity: 24-31
effectiv: 24
pending:  0

After offlining CPUs 31-24:

dstate:   0x01a31000
            IRQD_IRQ_DISABLED
            IRQD_IRQ_MASKED
            IRQD_SINGLE_TARGET
            IRQD_AFFINITY_SET
            IRQD_AFFINITY_MANAGED
            IRQD_MANAGED_SHUTDOWN
node:     0
affinity: 24-31
effectiv: 24
pending:  0

Now CPU 25 is onlined again, so it should become the effective target of
this interrupt. But due to the restrictions of the x86 interrupt affinity
setter, the interrupt ends up in the following state after being restarted:

dstate:   0x01601300
            IRQD_ACTIVATED
            IRQD_IRQ_STARTED
            IRQD_SINGLE_TARGET
            IRQD_AFFINITY_SET
            IRQD_SETAFFINITY_PENDING
            IRQD_AFFINITY_MANAGED
node:     0
affinity: 24-31
effectiv: 24
pending:  24-31

So the interrupt is still affine to CPU 24, which was the last CPU of that
affinity set to go offline, and the move to an online CPU within 24-31, in
this case 25, is pending. This deferral mechanism is specific to x86/ia64:
those architectures cannot move interrupts from thread context, so they
perform the move when an interrupt is actually handled. Until then the
move is merely marked as pending.
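
For illustration, the deferred move works roughly like this (a condensed
sketch of the GENERIC_PENDING_IRQ logic in kernel/irq/migration.c, not the
verbatim code):

/*
 * Condensed sketch of the deferred affinity move (GENERIC_PENDING_IRQ).
 * The pending mask recorded earlier is applied from interrupt context,
 * i.e. only when the next interrupt actually arrives on the old CPU.
 */
static void sketch_move_masked_irq(struct irq_data *data)
{
	struct irq_desc *desc = irq_data_to_desc(data);

	/* Nothing to do unless a move was recorded earlier */
	if (!irqd_is_setaffinity_pending(data))
		return;

	irqd_clr_move_pending(data);

	/* Apply the pending mask only if it still contains an online CPU */
	if (cpumask_any_and(desc->pending_mask, cpu_online_mask) < nr_cpu_ids)
		irq_do_set_affinity(data, desc->pending_mask, false);

	cpumask_clear(desc->pending_mask);
}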

What's worse is that offlining CPU 25 again results in:

dstate:   0x01601300
            IRQD_ACTIVATED
            IRQD_IRQ_STARTED
            IRQD_SINGLE_TARGET
            IRQD_AFFINITY_SET
            IRQD_SETAFFINITY_PENDING
            IRQD_AFFINITY_MANAGED
node:     0
affinity: 24-31
effectiv: 24
pending:  24-31

This means the interrupt has not been shut down, because the outgoing CPU
is not in the effective affinity mask. And of course nothing notices that
the effective affinity mask is pointing at an offline CPU.
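
The hotplug path only acts on interrupts whose effective affinity contains
the outgoing CPU, along these lines (a condensed sketch of the check in
kernel/irq/cpuhotplug.c, assuming the effective affinity mask is consulted
as on v4.14):

/*
 * Condensed sketch of the migration check on CPU offline. An interrupt
 * whose effective affinity does not contain the outgoing CPU is left
 * alone - including one whose effective affinity already points at a
 * CPU that went offline earlier.
 */
static bool sketch_migrate_one_irq(struct irq_desc *desc)
{
	struct irq_data *d = irq_desc_get_irq_data(desc);
	const struct cpumask *affinity = irq_data_get_effective_affinity_mask(d);

	/* Not this CPU's problem: neither migrated nor shut down */
	if (!cpumask_test_cpu(smp_processor_id(), affinity))
		return false;

	/* ... migrate the interrupt or shut it down ... */
	return true;
}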

In the case of restarting a managed interrupt, the move restriction does
not apply, so the affinity setting can be made unconditional. This needs to
be done _before_ the interrupt is started up, as otherwise the condition
for moving it from thread context would no longer be fulfilled.
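
The ordering matters because the regular setter defers the move once such
an architecture has the interrupt up and running, roughly as follows (a
condensed sketch of irq_set_affinity_locked() in kernel/irq/manage.c;
error handling and notifiers omitted):

/*
 * Condensed sketch of the regular affinity setter. On x86/ia64 a
 * request taken while the interrupt is live is merely recorded as
 * pending; only irq_do_set_affinity() writes it out immediately,
 * which is why the fix below calls that before __irq_startup().
 */
static int sketch_set_affinity_locked(struct irq_data *data,
				      const struct cpumask *mask, bool force)
{
	struct irq_desc *desc = irq_data_to_desc(data);

	if (irq_can_move_pcntxt(data))
		/* Movable from process context: apply immediately */
		return irq_do_set_affinity(data, mask, force);

	/* x86/ia64: record the move, apply it at the next hard interrupt */
	irqd_set_move_pending(data);
	irq_copy_pending(desc, mask);
	return 0;
}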

With that change applied, onlining CPU 25 after offlining CPUs 31-24
results in:

dstate:   0x01600200
            IRQD_ACTIVATED
            IRQD_IRQ_STARTED
            IRQD_SINGLE_TARGET
            IRQD_AFFINITY_MANAGED
node:     0
affinity: 24-31
effectiv: 25
pending:  

And after offlining CPU 25:

dstate:   0x01a30000
            IRQD_IRQ_DISABLED
            IRQD_IRQ_MASKED
            IRQD_SINGLE_TARGET
            IRQD_AFFINITY_MANAGED
            IRQD_MANAGED_SHUTDOWN
node:     0
affinity: 24-31
effectiv: 25
pending:  

which is the correct and expected result.
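
(The state dumps above match the format of the GENERIC_IRQ_DEBUGFS output
under /sys/kernel/debug/irq/irqs/<irq>, which is presumably how they were
captured.)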

Fixes: 761ea388e8c4 ("genirq: Handle managed irqs gracefully in irq_startup()")
Reported-by: YASUAKI ISHIMATSU <yasu.isimatu@...il.com>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: axboe@...nel.dk
Cc: linux-scsi@...r.kernel.org
Cc: Sumit Saxena <sumit.saxena@...adcom.com>
Cc: Marc Zyngier <marc.zyngier@....com>
Cc: mpe@...erman.id.au
Cc: Shivasharan Srikanteshwara <shivasharan.srikanteshwara@...adcom.com>
Cc: Kashyap Desai <kashyap.desai@...adcom.com>
Cc: keith.busch@...el.com
Cc: peterz@...radead.org
Cc: Hannes Reinecke <hare@...e.de>
Cc: Christoph Hellwig <hch@....de>
Cc: stable@...r.kernel.org
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1710042208400.2406@nanos

---
 kernel/irq/chip.c   | 2 +-
 kernel/irq/manage.c | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index 6fc89fd..5a2ef92c 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -265,8 +265,8 @@ int irq_startup(struct irq_desc *desc, bool resend, bool force)
 			irq_setup_affinity(desc);
 			break;
 		case IRQ_STARTUP_MANAGED:
+			irq_do_set_affinity(d, aff, false);
 			ret = __irq_startup(desc);
-			irq_set_affinity_locked(d, aff, false);
 			break;
 		case IRQ_STARTUP_ABORT:
 			return 0;
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index ef89f72..4bff6a1 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -188,6 +188,9 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
 	struct irq_chip *chip = irq_data_get_irq_chip(data);
 	int ret;
 
+	if (!chip || !chip->irq_set_affinity)
+		return -EINVAL;
+
 	ret = chip->irq_set_affinity(data, mask, force);
 	switch (ret) {
 	case IRQ_SET_MASK_OK:
