Message-ID: <alpine.DEB.2.20.1801161131160.1823@nanos>
Date: Tue, 16 Jan 2018 11:33:10 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Keith Busch <keith.busch@...el.com>
cc: LKML <linux-kernel@...r.kernel.org>
Subject: Re: [BUG 4.15-rc7] IRQ matrix management errors
On Tue, 16 Jan 2018, Keith Busch wrote:
> This is all way over my head, but the part that obviously shows
> something's gone wrong:
>
> kworker/u674:3-1421 [028] d... 335.307051: irq_matrix_reserve_managed: bit=56 cpu=0 online=1 avl=86 alloc=116 managed=3 online_maps=112 global_avl=22084, global_rsvd=157, total_alloc=570
> kworker/u674:3-1421 [028] d... 335.307053: irq_matrix_remove_managed: bit=56 cpu=0 online=1 avl=87 alloc=116 managed=2 online_maps=112 global_avl=22085, global_rsvd=157, total_alloc=570
> kworker/u674:3-1421 [028] .... 335.307054: vector_reserve_managed: irq=45 ret=-28
> kworker/u674:3-1421 [028] .... 335.307054: vector_setup: irq=45 is_legacy=0 ret=-28
> kworker/u674:3-1421 [028] d... 335.307055: vector_teardown: irq=45 is_managed=1 has_reserved=0
>
> Which leads me to x86_vector_alloc_irqs goto error:
>
> error:
> x86_vector_free_irqs(domain, virq, i + 1);
>
> The last parameter looks weird. It's the nr_irqs, and since we failed and
> bailed, I would think we'd need to subtract 1 rather than add 1. Adding
> 1 would doubly remove the failed one, and also remove the next one that
> was never set up, right?
Right. That's fishy. Let me stare at it.
> Or maybe irq_matrix_reserve_managed wasn't expected to fail in the
> first place?
Well, it can fail. I don't know why it fails in this case, but let me look
a bit more.
Thanks,
tglx