Message-ID: <72dc3881-2056-48b6-a710-f497d614dd53@gmail.com>
Date: Wed, 23 Apr 2025 22:19:17 +0800
From: Ethan Zhao <etzhao1900@...il.com>
To: Bjorn Helgaas <helgaas@...nel.org>, Thomas Gleixner <tglx@...utronix.de>
Cc: Jiri Slaby <jirislaby@...nel.org>, linux-kernel@...r.kernel.org,
 linux-pci@...r.kernel.org
Subject: Re: IRQ domain logging?



On 4/23/2025 5:07 AM, Bjorn Helgaas wrote:
> Hi Thomas,
> 
> IRQ domains and IRQs are critical infrastructure, but we don't really
> log anything when we discover controllers or set them up.  Do you
> think there would be any value in exposing some of this structure in
> dmesg to help people (like me!) understand how these are connected to
> devices and drivers?
> 
> For example, in a simple qemu system:
> 
>    IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
>    ACPI: Using IOAPIC for interrupt routing
>    ACPI: PCI: Interrupt link LNKA configured for IRQ 10
>    ACPI: PCI: Interrupt link GSIA configured for IRQ 16
>    hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
>    ACPI: \_SB_.GSIA: Enabled at IRQ 16
>    pcieport 0000:00:1c.0: PME: Signaling with IRQ 24
>    00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
>    ata1: SATA max UDMA/133 abar m4096@...eadb000 port 0xfeadb100 irq 28 lpm-pol 0
> 
> I think these are all wired interrupts, and maybe IRQ==GSI (?), and I
> think the ACPI link devices are configurable connections between an
> INTx and the IOAPIC, but it's kind of hard to connect them all
> together.
> 
> This from an arm64 system is even more obscure to me:
> 
>    NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
>    GICv3: 256 SPIs implemented
>    Root IRQ handler: gic_handle_irq
>    GICv3: GICv3 features: 16 PPIs
>    kvm [1]: vgic interrupt IRQ18
>    xhci-hcd xhci-hcd.0.auto: irq 67, io mem 0xfe800000
> 
> I have no clue where irq 67 goes.
> 
> Maybe there's no useful way to log anything here, I dunno; it just
> occurred to me when looking at Jiri's series to reduce the number of
> irqdomain interfaces.  PCI controller drivers do a lot of interrupt
> domain setup, and if that were more visible/concrete in dmesg, I think
> I might understand it better. 

The current visibility into interrupt routing is fragmented, which makes
it hard to observe how a specific interrupt, or a class of interrupts,
is actually routed. For anyone digging into system internals, a
traceroute-like tool that maps the path an interrupt takes would
significantly improve transparency and debuggability.

e.g., how an MSI is routed and remapped across the different interrupt
domains on x86:

MSI: pci-dev --> iommu (interrupt remapping) --> local apic --> cpu
(a wired INTx would instead pass through the ioapic before reaching the
iommu/apic)

But so far it seems there is not enough information exposed through the
kernel ABI (/proc, /sys, syscalls, dmesg, etc.) to compose a complete
chain like that?
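
That said, kernels built with CONFIG_GENERIC_IRQ_DEBUGFS do expose the
per-IRQ irqdomain hierarchy under /sys/kernel/debug/irq/irqs/<n>, which
covers at least the software side of the chain. A minimal sketch
(assuming debugfs is mounted at /sys/kernel/debug, the script runs as
root, and with the caveat that the file format is not a stable ABI, so
the parsing is only illustrative):

#!/usr/bin/env python3
# Sketch: print the irqdomain hierarchy for one IRQ from debugfs.
# Assumes CONFIG_GENERIC_IRQ_DEBUGFS and debugfs mounted at
# /sys/kernel/debug; must be run as root. The file layout is not a
# stable ABI, so this parsing is best-effort, not authoritative.
import re
import sys
from pathlib import Path

def domain_chain(irq: int) -> list[str]:
    """Collect the 'domain:' names (one per hierarchy level) from
    /sys/kernel/debug/irq/irqs/<irq>, outermost domain first."""
    text = (Path("/sys/kernel/debug/irq/irqs") / str(irq)).read_text()
    return re.findall(r"domain:\s+(\S+)", text)

if __name__ == "__main__":
    irq = int(sys.argv[1])
    chain = domain_chain(irq)
    # e.g. "IR-PCI-MSIX-0000:00:1c.0 --> INTEL-IR-0 --> VECTOR"
    print(" --> ".join(chain) if chain else f"no domain info for irq {irq}")

On an x86 box with interrupt remapping this typically prints something
like IR-PCI-MSIX-<bdf> --> INTEL-IR-0 --> VECTOR for an MSI-X vector,
but it still stops at the irqdomain layer, so the actual hardware hops
(iommu, lapic) have to be inferred.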

Thanks,
Ethan
> 
> Bjorn
> 

