Date:   Tue, 15 Nov 2022 23:19:47 +0100
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Angus Chen <angus.chen@...uarmicro.com>
Cc:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Ming Lei <ming.lei@...hat.com>,
        Jason Wang <jasowang@...hat.com>
Subject: Re: IRQ affinity problem from virtio_blk

On Tue, Nov 15 2022 at 03:40, Angus Chen wrote:
> Before probing one virtio_blk device:
> crash_cts> p *vector_matrix
> $44 = {
>   matrix_bits = 256,
>   alloc_start = 32,
>   alloc_end = 236,
>   alloc_size = 204,
>   global_available = 15354,
>   global_reserved = 154,
>   systembits_inalloc = 3,
>   total_allocated = 411,
>   online_maps = 80,
>   maps = 0x46100,
>   scratch_map = {1160908723191807, 0, 1, 18435222497520517120},
>   system_map = {1125904739729407, 0, 1, 18435221191850459136}
> }
> After probing one virtio_blk device:
> crash_cts> p *vector_matrix
> $45 = {
>   matrix_bits = 256,
>   alloc_start = 32,
>   alloc_end = 236,
>   alloc_size = 204,
>   global_available = 15273,
>   global_reserved = 154,
>   systembits_inalloc = 3,
>   total_allocated = 413,
>   online_maps = 80,
>   maps = 0x46100,
>   scratch_map = {25769803776, 0, 0, 14680064},
>   system_map = {1125904739729407, 0, 1, 18435221191850459136}
> }
>
> We can see global_available drop from 15354 to 15273, a difference of 81.
> And total_allocated increases from 411 to 413: one config irq and
> one vq irq.

Right. That's perfectly fine. At the point where you are looking at it,
the matrix allocator has given out 2 vectors, as can be seen from
total_allocated.

It has also put aside another 79 vectors for the remaining queues. Those
queues have not yet requested their interrupts, so there is no allocation
yet, but the vectors are guaranteed to be available when request_irq()
runs for those queues, which is what does the actual allocation.
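To make the two stages concrete, here is a minimal, illustrative sketch
of how a generic PCI driver requests managed per-CPU vectors. The names
(example_setup_irqs, example_queue_irq) are made up; virtio_blk itself
goes through virtio_pci, which uses the same
pci_alloc_irq_vectors_affinity() machinery underneath:

	/* Illustrative only -- a generic PCI driver shape, not virtio_blk. */
	#include <linux/pci.h>
	#include <linux/interrupt.h>

	static irqreturn_t example_queue_irq(int irq, void *data)
	{
		/* per-queue handling would go here */
		return IRQ_HANDLED;
	}

	static int example_setup_irqs(struct pci_dev *pdev, unsigned int nr_queues)
	{
		/* Keep the config interrupt out of the affinity spreading. */
		struct irq_affinity affd = { .pre_vectors = 1 };
		int nvecs, i, err;

		/*
		 * Stage 1: reservation. PCI_IRQ_AFFINITY makes the queue
		 * vectors managed: the matrix allocator puts one vector
		 * aside on each target CPU (visible as a drop in
		 * global_available) without actually allocating it yet.
		 */
		nvecs = pci_alloc_irq_vectors_affinity(pdev, 1, nr_queues + 1,
					PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);
		if (nvecs < 0)
			return nvecs;

		/*
		 * Stage 2: allocation. request_irq() activates the vector;
		 * only now does total_allocated go up, and the reservation
		 * guarantees this cannot fail for lack of a vector.
		 * (Error unwinding omitted for brevity.)
		 */
		for (i = 0; i < nvecs; i++) {
			err = request_irq(pci_irq_vector(pdev, i),
					  example_queue_irq, 0, "example", pdev);
			if (err)
				return err;
		}
		return 0;
	}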

Btw, you can enable CONFIG_GENERIC_IRQ_DEBUGFS and then look at the
content of /sys/kernel/debug/irq/domains/VECTOR, which gives you a very
clear picture of what's going on. No need for gdb.

> It is easy to exhaust the IRQ resources, because there can be more
> than 512 virtio_blk devices.

How so? virtio_blk allocates a config interrupt and one queue interrupt
per CPU. So in your case a total of 81.
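That matches the delta in your dump exactly:

    1 config vector + 80 CPUs * 1 queue vector = 81
    15354 - 15273 = 81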

How would you exhaust the vector space? Each CPU has roughly 200 (in
your case exactly 204) vectors that can be handed out to devices. You'd
need to instantiate about 200 virtio_blk devices to get to the point of
vector exhaustion.
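Back-of-the-envelope, using the numbers from your dump: each device puts
one queue vector on every CPU, so per CPU that is ~204 vectors at 1
vector per device, i.e. ~200 devices; counted globally it is 15354
available / 81 per device ~= 189. Either way you run out at around 200
devices.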

So what are you actually worried about and which problem are you trying
to solve?

Thanks,

        tglx

