Message-ID: <20180116071145.GA5643@localhost.localdomain>
Date: Tue, 16 Jan 2018 00:11:45 -0700
From: Keith Busch <keith.busch@...el.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: Re: [BUG 4.15-rc7] IRQ matrix management errors
This is all way over my head, but here's the part that clearly shows
something has gone wrong:
kworker/u674:3-1421 [028] d... 335.307051: irq_matrix_reserve_managed: bit=56 cpu=0 online=1 avl=86 alloc=116 managed=3 online_maps=112 global_avl=22084, global_rsvd=157, total_alloc=570
kworker/u674:3-1421 [028] d... 335.307053: irq_matrix_remove_managed: bit=56 cpu=0 online=1 avl=87 alloc=116 managed=2 online_maps=112 global_avl=22085, global_rsvd=157, total_alloc=570
kworker/u674:3-1421 [028] .... 335.307054: vector_reserve_managed: irq=45 ret=-28
kworker/u674:3-1421 [028] .... 335.307054: vector_setup: irq=45 is_legacy=0 ret=-28
kworker/u674:3-1421 [028] d... 335.307055: vector_teardown: irq=45 is_managed=1 has_reserved=0
That ret=-28 is -ENOSPC, which leads me to the "goto error" exit in
x86_vector_alloc_irqs():
error:
x86_vector_free_irqs(domain, virq, i + 1);
The last parameter looks weird. It's the nr_irqs, and since we failed and
bailed, I would think we'd need to subtract 1 rather than add 1. Adding
1 would doubly remove the failed one, and remove the next one that
was never set up, right?
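
To make the off-by-one question concrete, here's a minimal standalone
sketch of the same allocate-in-a-loop/unwind-on-error pattern. It is
not the kernel code; setup_one() and free_irqs() are made-up stand-ins,
with the third entry failing the way irq 45 does in the trace above:

#include <stdio.h>
#include <stdbool.h>

#define NR_IRQS 4

static bool setup_one(int i)
{
        /* Pretend the third allocation fails, like the trace above. */
        return i != 2;
}

static void free_irqs(int count)
{
        int i;

        /* Tears down entries 0 .. count - 1. */
        for (i = 0; i < count; i++)
                printf("freeing entry %d\n", i);
}

int main(void)
{
        int i;

        for (i = 0; i < NR_IRQS; i++) {
                if (!setup_one(i))
                        goto error;
        }
        return 0;

error:
        /*
         * i is the index of the entry that failed, so a count of
         * i + 1 frees entries 0..i including the failed one. Whether
         * that is required cleanup or a double free depends on how
         * much of entry i was already torn down before we jumped
         * here, which is exactly what I'm unsure about.
         */
        free_irqs(i + 1);
        return 1;
}

With entry 2 failing, free_irqs(i + 1) touches entries 0, 1 and 2: the
two that were fully set up plus the one that failed mid-setup.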
Or maybe irq_matrix_reserve_managed wasn't expected to fail in the
first place?