Message-ID: <TY2PR06MB3424CB11DB57CA1FAA16F10D85049@TY2PR06MB3424.apcprd06.prod.outlook.com>
Date:   Tue, 15 Nov 2022 03:40:02 +0000
From:   Angus Chen <angus.chen@...uarmicro.com>
To:     "tglx@...utronix.de" <tglx@...utronix.de>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Ming Lei <ming.lei@...hat.com>,
        Jason Wang <jasowang@...hat.com>
Subject: IRQ affinity problem from virtio_blk 

Hi all,
I tested Linux 6.1 and found that virtio_blk allocates its interrupts with managed affinity (IRQD_AFFINITY_MANAGED).
The machine has 80 CPUs across two NUMA nodes.
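For context, the managed affinity comes from virtio_blk handing a struct irq_affinity descriptor to the transport when it allocates its virtqueues. A simplified paraphrase of drivers/block/virtio_blk.c as I read it in 6.1 (not a verbatim quote, details elided):

static int init_vq(struct virtio_blk *vblk)
{
        struct irq_affinity desc = { 0, };
        ...
        /* A non-NULL desc makes the transport spread the vq vectors with
         * irq_create_affinity_masks(), so they end up as managed vectors
         * (IRQD_AFFINITY_MANAGED). */
        err = virtio_find_vqs(vblk->vdev, num_vqs, vqs, callbacks, names,
                              &desc);
        ...
}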

Before probing one virtio_blk:
crash_cts> p *vector_matrix
$44 = {
  matrix_bits = 256,
  alloc_start = 32,
  alloc_end = 236,
  alloc_size = 204,
  global_available = 15354,
  global_reserved = 154,
  systembits_inalloc = 3,
  total_allocated = 411,
  online_maps = 80,
  maps = 0x46100,
  scratch_map = {1160908723191807, 0, 1, 18435222497520517120},
  system_map = {1125904739729407, 0, 1, 18435221191850459136}
}
After probing one virtio_blk:
crash_cts> p *vector_matrix
$45 = {
  matrix_bits = 256,
  alloc_start = 32,
  alloc_end = 236,
  alloc_size = 204,
  global_available = 15273,
  global_reserved = 154,
  systembits_inalloc = 3,
  total_allocated = 413,
  online_maps = 80,
  maps = 0x46100,
  scratch_map = {25769803776, 0, 0, 14680064},
  system_map = {1125904739729407, 0, 1, 18435221191850459136}
}

We can see global_available drop from 15354 to 15273, i.e. by 81, while total_allocated only increases from 411 to 413: one config IRQ and one vq IRQ.
The drop of 81 is one vector for the config interrupt plus one vector reserved on each of the 80 CPUs in the vq's affinity mask; only two of these ever show up in total_allocated.
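The per-CPU reservation is done in kernel/irq/matrix.c. Roughly paraphrased from my reading (simplified, cleanup path elided):

int irq_matrix_reserve_managed(struct irq_matrix *m, const struct cpumask *msk)
{
        unsigned int cpu;

        for_each_cpu(cpu, msk) {
                struct cpumap *cm = per_cpu_ptr(m->maps, cpu);
                unsigned int bit = matrix_alloc_area(m, cm, 1, true);

                if (bit >= m->alloc_end)
                        goto cleanup;
                cm->managed++;
                if (cm->online) {
                        cm->available--;
                        /* One managed vq IRQ with an 80-CPU mask therefore
                         * subtracts 80 from global_available up front. */
                        m->global_available--;
                }
        }
        return 0;
cleanup:
        ...
}

So the cost of a managed IRQ is paid in global_available at reservation time, once per CPU in the mask, regardless of how many vectors are ever actually allocated.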

It is easy to exhaust the IRQ vector space this way, because there can be more than 512 virtio_blk devices in a system.
And from reading the irq matrix code, this reservation behaviour with IRQD_AFFINITY_MANAGED set is intentional, a feature rather than a bug.
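To put numbers on it, using the dump above and assuming one vq plus one config interrupt per device: each virtio_blk costs 81 available vectors, so 15354 / 81 ≈ 189 devices are enough to drain global_available to zero, far fewer than 512.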

If we exhaust the IRQ vectors, per_vq_vectors allocation breaks, so virtblk_map_queues will eventually fall back to blk_mq_map_queues.
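The fallback I mean is in block/blk-mq-virtio.c; sketched from my reading of 6.1 (slightly simplified):

int blk_mq_virtio_map_queues(struct blk_mq_queue_map *qmap,
                             struct virtio_device *vdev, int first_vec)
{
        const struct cpumask *mask;
        unsigned int queue, cpu;

        if (!vdev->config->get_vq_affinity)
                goto fallback;

        for (queue = 0; queue < qmap->nr_queues; queue++) {
                mask = vdev->config->get_vq_affinity(vdev, first_vec + queue);
                if (!mask)      /* no per-vq vector, so no affinity info */
                        goto fallback;
                for_each_cpu(cpu, mask)
                        qmap->mq_map[cpu] = qmap->queue_offset + queue;
        }
        return 0;

fallback:
        /* Generic spreading that ignores the device's vector layout. */
        return blk_mq_map_queues(qmap);
}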

Even if we don't exhaust the vectors globally, the managed reservation can still fail when the per-CPU usage is unbalanced, for example when one CPU has far fewer free vectors than the others: IRQD_AFFINITY_MANAGED needs a free vector on every CPU in the mask, so a single full CPU makes the whole reservation fail.

I'm not a native English speaker; any suggestions would be appreciated.
