Message-ID: <87h5spk01t.ffs@tglx>
Date: Tue, 13 Jan 2026 16:24:46 +0100
From: Thomas Gleixner <tglx@...nel.org>
To: Bert Karwatzki <spasswolf@....de>, linux-kernel@...r.kernel.org
Cc: Bert Karwatzki <spasswolf@....de>, linux-next@...r.kernel.org, Mario
 Limonciello <mario.limonciello@....com>, Sebastian Andrzej Siewior
 <bigeasy@...utronix.de>, Clark Williams <clrkwllms@...nel.org>, Steven
 Rostedt <rostedt@...dmis.org>, Christian König
 <christian.koenig@....com>,
 regressions@...ts.linux.dev, linux-pci@...r.kernel.org,
 linux-acpi@...r.kernel.org, "Rafael J . Wysocki"
 <rafael.j.wysocki@...el.com>, acpica-devel@...ts.linux.dev, Robert Moore
 <robert.moore@...el.com>, Saket Dumbre <saket.dumbre@...el.com>, Bjorn
 Helgaas <bhelgaas@...gle.com>, Clemens Ladisch <clemens@...isch.de>,
 Jinchao Wang <wangjinchao600@...il.com>, Yury Norov
 <yury.norov@...il.com>, Anna Schumaker <anna.schumaker@...cle.com>,
 Baoquan He <bhe@...hat.com>, "Darrick J. Wong" <djwong@...nel.org>, Dave
 Young <dyoung@...hat.com>, Doug Anderson <dianders@...omium.org>,
 "Guilherme G. Piccoli" <gpiccoli@...lia.com>, Helge Deller
 <deller@....de>, Ingo Molnar <mingo@...nel.org>, Jason Gunthorpe
 <jgg@...pe.ca>, Jonathan Cameron <Jonathan.Cameron@...wei.com>, Joel
 Granados <joel.granados@...nel.org>, John Ogness
 <john.ogness@...utronix.de>, Kees Cook <kees@...nel.org>, Li Huafei
 <lihuafei1@...wei.com>, "Luck, Tony" <tony.luck@...el.com>, Luo Gengkun
 <luogengkun@...weicloud.com>, Max Kellermann <max.kellermann@...os.com>,
 Nam Cao <namcao@...utronix.de>, oushixiong <oushixiong@...inos.cn>, Petr
 Mladek <pmladek@...e.com>, Qianqiang Liu <qianqiang.liu@....com>, Sergey
 Senozhatsky <senozhatsky@...omium.org>, Sohil Mehta
 <sohil.mehta@...el.com>, Tejun Heo <tj@...nel.org>, Thomas Zimmermann
 <tzimmermann@...e.de>, Thorsten Blum <thorsten.blum@...ux.dev>, Ville
 Syrjala <ville.syrjala@...ux.intel.com>, Vivek Goyal <vgoyal@...hat.com>,
 Yicong Yang <yangyicong@...ilicon.com>, Yunhui Cui
 <cuiyunhui@...edance.com>, Andrew Morton <akpm@...ux-foundation.org>,
 W_Armin@....de
Subject: Re: NMI stack overflow during resume of PCIe bridge with
 CONFIG_HARDLOCKUP_DETECTOR=y

On Tue, Jan 13 2026 at 10:41, Bert Karwatzki wrote:
> Here's the result in case of the crash:
> 2026-01-12T04:24:36.809904+01:00 T1510;acpi_ex_system_memory_space_handler 255: logical_addr_ptr = ffffc066977b3000
> 2026-01-12T04:24:36.846170+01:00 C14;exc_nmi: 0

Here the NMI triggers in non-task context on CPU14.

> 2026-01-12T04:24:36.960760+01:00 C14;exc_nmi: 10.3
> 2026-01-12T04:24:36.960760+01:00 C14;default_do_nmi 
> 2026-01-12T04:24:36.960760+01:00 C14;nmi_handle: type=0x0
> 2026-01-12T04:24:36.960760+01:00 C14;nmi_handle: a=0xffffffffa1612de0
> 2026-01-12T04:24:36.960760+01:00 C14;nmi_handle: a->handler=perf_event_nmi_handler+0x0/0xa6
> 2026-01-12T04:24:36.960760+01:00 C14;perf_event_nmi_handler: 0
> 2026-01-12T04:24:36.960760+01:00 C14;perf_event_nmi_handler: 1
> 2026-01-12T04:24:36.960760+01:00 C14;perf_event_nmi_handler: 2
> 2026-01-12T04:24:36.960760+01:00 C14;x86_pmu_handle_irq: 2
> 2026-01-12T04:24:36.960760+01:00 C14;x86_pmu_handle_irq: 2.6
> 2026-01-12T04:24:36.960760+01:00 C14;__perf_event_overflow: 0
> 2026-01-12T04:24:36.960760+01:00 C14;__perf_event_overflow: 6.99: overflow_handler=watchdog_overflow_callback+0x0/0x10d
> 2026-01-12T04:24:36.960760+01:00 C14;watchdog_overflow_callback: 0
> 2026-01-12T04:24:36.960760+01:00 C14;__ktime_get_fast_ns_debug: 0.1
> 2026-01-12T04:24:36.960760+01:00 C14;tk_clock_read_debug: read=read_hpet+0x0/0xf0
> 2026-01-12T04:24:36.960760+01:00 C14;read_hpet: 0
> 2026-01-12T04:24:36.960760+01:00 C14;read_hpet: 0.1

> 2026-01-12T04:24:36.960760+01:00 T0;exc_nmi: 0

This one triggers in task context of PID0, aka the idle task, but it's not
clear on which CPU that happens. It's probably CPU13, as that continues
with the expected 10.3 output, but that's ~1.71 seconds later.

> 2026-01-12T04:24:38.674625+01:00 C13;exc_nmi: 10.3
> 2026-01-12T04:24:38.674625+01:00 C13;default_do_nmi 
> 2026-01-12T04:24:38.674625+01:00 C13;nmi_handle: type=0x0
> 2026-01-12T04:24:38.674625+01:00 C13;nmi_handle: a=0xffffffffa1612de0
> 2026-01-12T04:24:38.674625+01:00 C13;nmi_handle: a->handler=perf_event_nmi_handler+0x0/0xa6
> 2026-01-12T04:24:38.674625+01:00 C13;perf_event_nmi_handler: 0
> 2026-01-12T04:24:38.674625+01:00 C13;perf_event_nmi_handler: 1
> 2026-01-12T04:24:38.674625+01:00 C13;perf_event_nmi_handler: 2
> 2026-01-12T04:24:38.674625+01:00 C13;x86_pmu_handle_irq: 2
> 2026-01-12T04:24:38.674625+01:00 C13;x86_pmu_handle_irq: 2.6
> 2026-01-12T04:24:38.674625+01:00 C13;__perf_event_overflow: 0
> 2026-01-12T04:24:38.674625+01:00 C13;__perf_event_overflow: 6.99: overflow_handler=watchdog_overflow_callback+0x0/0x10d
> 2026-01-12T04:24:38.674625+01:00 C13;watchdog_overflow_callback: 0
> 2026-01-12T04:24:38.674625+01:00 C13;__ktime_get_fast_ns_debug: 0.1
> 2026-01-12T04:24:38.674625+01:00 C13;tk_clock_read_debug: read=read_hpet+0x0/0xf0
> 2026-01-12T04:24:38.674625+01:00 C13;read_hpet: 0
> 2026-01-12T04:24:38.674625+01:00 C13;read_hpet: 0.1

> 2026-01-12T04:24:38.674625+01:00 T0;exc_nmi: 0

Same picture as above, but this time on CPU2 with a delay of 0.68 seconds.

> 2026-01-12T04:24:39.355101+01:00 C2;exc_nmi: 10.3
> 2026-01-12T04:24:39.355101+01:00 C2;default_do_nmi 
> 2026-01-12T04:24:39.355101+01:00 C2;nmi_handle: type=0x0
> 2026-01-12T04:24:39.355101+01:00 C2;nmi_handle: a=0xffffffffa1612de0
> 2026-01-12T04:24:39.355101+01:00 C2;nmi_handle: a->handler=perf_event_nmi_handler+0x0/0xa6
> 2026-01-12T04:24:39.355101+01:00 C2;perf_event_nmi_handler: 0
> 2026-01-12T04:24:39.355101+01:00 C2;perf_event_nmi_handler: 1
> 2026-01-12T04:24:39.355101+01:00 C2;perf_event_nmi_handler: 2
> 2026-01-12T04:24:39.355101+01:00 C2;x86_pmu_handle_irq: 2
> 2026-01-12T04:24:39.355101+01:00 C2;x86_pmu_handle_irq: 2.6
> 2026-01-12T04:24:39.355101+01:00 C2;__perf_event_overflow: 0
> 2026-01-12T04:24:39.355101+01:00 C2;__perf_event_overflow: 6.99: overflow_handler=watchdog_overflow_callback+0x0/0x10d
> 2026-01-12T04:24:39.355101+01:00 C2;watchdog_overflow_callback: 0
> 2026-01-12T04:24:39.355101+01:00 C2;__ktime_get_fast_ns_debug: 0.1
> 2026-01-12T04:24:39.355101+01:00 C2;tk_clock_read_debug: read=read_hpet+0x0/0xf0
> 2026-01-12T04:24:39.355101+01:00 C2;read_hpet: 0
> 2026-01-12T04:24:39.355101+01:00 C2;read_hpet: 0.1

> 2026-01-12T04:24:39.355101+01:00 T0;exc_nmi: 0

Again the same picture, this time on CPU0 with a delay of 0.06 seconds.

> 2026-01-12T04:24:39.410207+01:00 C0;exc_nmi: 10.3
> 2026-01-12T04:24:39.410207+01:00 C0;default_do_nmi 
> 2026-01-12T04:24:39.410207+01:00 C0;nmi_handle: type=0x0
> 2026-01-12T04:24:39.410207+01:00 C0;nmi_handle: a=0xffffffffa1612de0
> 2026-01-12T04:24:39.410207+01:00 C0;nmi_handle: a->handler=perf_event_nmi_handler+0x0/0xa6
> 2026-01-12T04:24:39.410207+01:00 C0;perf_event_nmi_handler: 0
> 2026-01-12T04:24:39.410207+01:00 C0;perf_event_nmi_handler: 1
> 2026-01-12T04:24:39.410207+01:00 C0;perf_event_nmi_handler: 2
> 2026-01-12T04:24:39.410207+01:00 C0;x86_pmu_handle_irq: 2
> 2026-01-12T04:24:39.410207+01:00 C0;x86_pmu_handle_irq: 2.6
> 2026-01-12T04:24:39.410207+01:00 C0;__perf_event_overflow: 0
> 2026-01-12T04:24:39.410207+01:00 C0;__perf_event_overflow: 6.99: overflow_handler=watchdog_overflow_callback+0x0/0x10d
> 2026-01-12T04:24:39.410207+01:00 C0;watchdog_overflow_callback: 0
> 2026-01-12T04:24:39.410207+01:00 C0;__ktime_get_fast_ns_debug: 0.1
> 2026-01-12T04:24:39.410207+01:00 C0;tk_clock_read_debug: read=read_hpet+0x0/0xf0
> 2026-01-12T04:24:39.410207+01:00 C0;read_hpet: 0
> 2026-01-12T04:24:39.410207+01:00 C0;read_hpet: 0.1

> 2026-01-12T04:24:39.410207+01:00 T0;exc_nmi: 0

....

> In the case of the crash the interrupt handler never returns because when accessing
> the HPET another NMI is triggered. This goes on until a crash happens, probably because
> of stack overflow.

No. NMI nesting is limited to one level; a nested NMI is latched and returns
immediately:

        if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {
                this_cpu_write(nmi_state, NMI_LATCHED);
                return;
        }


So it's not a stack overflow. What's more likely is that after a while
_ALL_ CPUs end up hung in the NMI handler after tripping over the
HPET read.
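
For illustration, the per-CPU latch behind this looks roughly like the sketch
below. This is a simplified model, not the literal code in
arch/x86/kernel/nmi.c; the real implementation uses this_cpu_*() accessors
and does additional CR2/debug register juggling around the handler
invocation.

        /* Simplified sketch of the per-CPU NMI latch (model only) */
        enum nmi_states { NMI_NOT_RUNNING, NMI_EXECUTING, NMI_LATCHED };

        static enum nmi_states nmi_state;   /* per-CPU in the real kernel */

        static void handle_nmi(void) { }    /* stand-in for default_do_nmi() */

        void sketch_exc_nmi(void)
        {
                if (nmi_state != NMI_NOT_RUNNING) {
                        /* Nested NMI: latch it and unwind immediately */
                        nmi_state = NMI_LATCHED;
                        return;
                }
        nmi_restart:
                nmi_state = NMI_EXECUTING;

                handle_nmi();

                /* An NMI was latched while the handler ran: run it again */
                if (nmi_state == NMI_LATCHED)
                        goto nmi_restart;

                nmi_state = NMI_NOT_RUNNING;
        }

A nested NMI therefore never adds a stack frame; it only makes the first
invocation loop. Which also means that once the first invocation is stuck in
the HPET read, any latched NMI is never processed either.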

> The behaviour described here seems to be similar to the bug that commit
> 3d5f4f15b778 ("watchdog: skip checks when panic is in progress") is fixing, but
> this is actually a different bug, as kernel 6.18 (which contains 3d5f4f15b778)
> is also affected (I've conducted 5 tests with 6.18 so far and got 4 crashes,
> which occurred after 0.5h, 1h, 4.5h and 1.5h of testing).
> Nevertheless these look similar enough to CC the involved people.

There is nothing similar.

Your problem originates from a screwed-up hardware state, which in turn
causes the HPET to go haywire for unknown reasons.
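
For context, the HPET read the watchdog path ends up in (read_hpet() in
arch/x86/kernel/hpet.c) does a direct MMIO read of the main counter when
called from NMI context, roughly as sketched below. The locked slow path for
non-NMI callers is omitted and details may differ from the current tree.

        /* Rough sketch of read_hpet(), NMI-context path only */
        static u64 sketch_read_hpet(struct clocksource *cs)
        {
                /*
                 * In NMI context the main counter is read directly via
                 * MMIO. Per the analysis above, this is where the CPUs
                 * end up stuck once the hardware state is broken.
                 */
                if (in_nmi())
                        return (u64)hpet_readl(HPET_COUNTER);

                /* Non-NMI callers go through a contention-managed path */
                return (u64)hpet_readl(HPET_COUNTER);
        }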

What is the physical address behind this ACPI handler access:

       logical_addr_ptr = ffffc066977b3000

Please provide that along with the full output of /proc/iomem.
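
One way to get at that, sketched below, is to extend the existing
instrumentation in ACPICA's acpi_ex_system_memory_space_handler()
(drivers/acpi/acpica/exregion.c): the handler's "address" argument is the
physical address of the region access, so printing it next to
logical_addr_ptr gives the mapping directly. This is only a suggestion; the
debug print which produced the trace above may already be close to this.

        /*
         * Sketch only: print the physical address of the access next to
         * the mapped logical address inside the memory space handler.
         */
        pr_info("%s %d: address = 0x%llx, logical_addr_ptr = %px\n",
                __func__, __LINE__,
                (unsigned long long)address, logical_addr_ptr);

That physical address can then be matched against /proc/iomem to see which
resource the AML is actually poking.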

Thanks,

        tglx
