Message-ID: <20221115174152-mutt-send-email-mst@kernel.org>
Date: Tue, 15 Nov 2022 17:44:41 -0500
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Angus Chen <angus.chen@...uarmicro.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Ming Lei <ming.lei@...hat.com>,
Jason Wang <jasowang@...hat.com>
Subject: Re: IRQ affinity problem from virtio_blk
Thanks Thomas, I have a question:
On Tue, Nov 15, 2022 at 11:19:47PM +0100, Thomas Gleixner wrote:
> On Tue, Nov 15 2022 at 03:40, Angus Chen wrote:
> > Before probing one virtio_blk:
> > crash_cts> p *vector_matrix
> > $44 = {
> > matrix_bits = 256,
> > alloc_start = 32,
> > alloc_end = 236,
> > alloc_size = 204,
> > global_available = 15354,
> > global_reserved = 154,
> > systembits_inalloc = 3,
> > total_allocated = 411,
> > online_maps = 80,
> > maps = 0x46100,
> > scratch_map = {1160908723191807, 0, 1, 18435222497520517120},
> > system_map = {1125904739729407, 0, 1, 18435221191850459136}
> > }
> > After probing one virtio_blk:
> > crash_cts> p *vector_matrix
> > $45 = {
> > matrix_bits = 256,
> > alloc_start = 32,
> > alloc_end = 236,
> > alloc_size = 204,
> > global_available = 15273,
> > global_reserved = 154,
> > systembits_inalloc = 3,
> > total_allocated = 413,
> > online_maps = 80,
> > maps = 0x46100,
> > scratch_map = {25769803776, 0, 0, 14680064},
> > system_map = {1125904739729407, 0, 1, 18435221191850459136}
> > }
> >
> > We can see global_available drop from 15354 to 15273, a difference
> > of 81. And total_allocated increases from 411 to 413: one config
> > irq and one vq irq.
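The counter deltas quoted above are worth checking arithmetically. A minimal sketch (values copied from the two crash dumps; the 80-CPU count is the online_maps field):

```python
# Check the matrix-allocator counter deltas from the two dumps above.
before_available, after_available = 15354, 15273
before_allocated, after_allocated = 411, 413
online_cpus = 80

consumed = before_available - after_available
assert consumed == 81  # 1 config irq + 80 per-CPU queue vectors

allocated = after_allocated - before_allocated
assert allocated == 2  # config irq + the one queue irq already requested

# The rest are reserved for the other queues, not yet allocated:
reserved_only = consumed - allocated
print(reserved_only)  # 79
```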
>
> Right. That's perfectly fine. At the point where you are looking at it, the
> matrix allocator has given out 2 vectors as can be seen via
> total_allocated.
>
> But then it also has another 79 vectors put aside for the other queues,
What makes it put these vectors aside? pci_alloc_irq_vectors_affinity()?
> but those queues have not yet requested the interrupts so there is no
> allocation yet. But the vectors are guaranteed to be available when
> request_irq() for those queues runs, which does the actual allocation.
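The two-stage behaviour described here (vectors reserved up front, allocated only at request_irq() time) can be modelled with a toy counter. This is only a sketch of the semantics; the real per-CPU allocator lives in kernel/irq/matrix.c:

```python
# Toy model of the reserve-then-allocate scheme described above.
# Reservation drops global_available; only request_irq() bumps
# total_allocated. Not the real kernel/irq/matrix.c code.
class ToyMatrix:
    def __init__(self, available):
        self.global_available = available
        self.total_allocated = 0
        self.reserved = 0

    def reserve(self, n):
        # guarantee n vectors for later request_irq() calls
        assert n <= self.global_available
        self.global_available -= n
        self.reserved += n

    def request_irq(self):
        # turn one reservation into a real allocation
        assert self.reserved > 0
        self.reserved -= 1
        self.total_allocated += 1

m = ToyMatrix(available=15354)
m.reserve(81)    # config irq + one queue vector per CPU (80 CPUs)
m.request_irq()  # config interrupt
m.request_irq()  # the first queue interrupt
print(m.global_available, m.total_allocated)  # 15273 2
```

This reproduces the deltas in the dumps: global_available down by 81, total_allocated up by 2, with 79 vectors still reserved for the queues that have not requested their interrupts yet.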
>
> Btw, you can enable CONFIG_GENERIC_IRQ_DEBUGFS and then look at the
> content of /sys/kernel/debug/irq/domains/VECTOR, which gives you a very
> clear picture of what's going on. No need for gdb.
>
> > It is easy to exhaust the irq resources, because there could be
> > more than 512 virtio_blk devices.
>
> How so? virtio_blk allocates a config interrupt and one queue interrupt
> per CPU. So in your case a total of 81.
>
> How would you exhaust the vector space? Each CPU has about ~200 (in your
> case exactly 204) vectors which can be handed out to devices. You'd need
> to instantiate about 200 virtio_blk devices to get to the point of
> vector exhaustion.
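The exhaustion estimate checks out numerically. A back-of-the-envelope sketch using the counters from the dumps, assuming every device reserves exactly one queue vector per CPU plus one config interrupt:

```python
# Back-of-the-envelope check of the "about 200 devices" estimate,
# using the counters from the crash dumps above (not kernel code).
vectors_per_cpu = 204      # alloc_size in the dump
cpus = 80                  # online_maps
per_device = cpus + 1      # one queue vector per CPU + 1 config irq

global_capacity = vectors_per_cpu * cpus
devices = global_capacity // per_device
print(devices)  # 16320 // 81 = 201, i.e. roughly 200 virtio_blk devices
```

The per-CPU view gives the same answer: each device puts roughly one queue vector on every CPU, so the ~204 vectors per CPU are the binding constraint either way.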
>
> So what are you actually worried about and which problem are you trying
> to solve?
>
> Thanks,
>
> tglx
>