Message-ID: <5269ecde-be8e-4920-a76f-882da1475d5d@huawei.com>
Date: Mon, 13 Oct 2025 10:56:20 +0800
From: Jinqian Yang <yangjinqian1@...wei.com>
To: <linux-arm-kernel@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
	Marc Zyngier <maz@...nel.org>, Alex Williamson <alex.williamson@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>, Zenghui Yu <yuzenghui@...wei.com>
CC: jiangkunkun <jiangkunkun@...wei.com>, Zhou Wang <wangzhou1@...ilicon.com>,
	liuyonglong <liuyonglong@...wei.com>
Subject: [Question] QEMU VM fails to restart repeatedly with VFIO passthrough
 on GICv4.1

Hi all,

On a GICv4.1 system running kernel 6.16, we launch VMs with QEMU and
pass VF devices through to them with VFIO. After repeatedly booting and
killing the VMs hundreds of times, the host reports call traces and the
VMs become unresponsive. The call traces all go through VFIO code paths.
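
For reference, the reproduction loop is roughly the following (a minimal
sketch; the VF address, guest image path, boot delay, and QEMU options
below are illustrative placeholders, not our exact command line):

#!/usr/bin/env python3
# Minimal repro sketch: boot a VM with a VFIO-passthrough VF, kill it,
# and repeat hundreds of times. All paths/addresses are placeholders.
import subprocess
import time

QEMU = "qemu-system-aarch64"
VF_BDF = "0000:7d:00.1"        # hypothetical VF, already bound to vfio-pci
IMAGE = "/path/to/guest.img"   # placeholder guest disk image

for i in range(500):
    vm = subprocess.Popen([
        QEMU,
        "-machine", "virt,gic-version=host",   # use the host GIC (v4.1)
        "-enable-kvm",
        "-cpu", "host",
        "-m", "4096",
        "-nographic",
        "-drive", f"file={IMAGE},format=raw",
        "-device", f"vfio-pci,host={VF_BDF}",  # VF passthrough
    ])
    time.sleep(10)   # let the guest get partway through boot
    vm.kill()        # SIGKILL the VM, as in our test
    vm.wait()
    print(f"iteration {i} done")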

[14201.974880] BUG: Bad page map in process qemu-system-aar  pte:fefefefefefefefe pmd:8000820b1ba0403
[14201.974895] addr:0000fffdd7400000 vm_flags:80240644bb anon_vma:0000000000000000 mapping:ffff08208e9b7758 index:401eed6a
[14201.974905] file:[vfio-device] fault:vfio_pci_mmap_page_fault [vfio_pci_core] mmap:vfio_device_fops_mmap [vfio] mmap_prepare: 0x0 read_folio:0x0
[14201.974923] CPU: 2 UID: 0 PID: 50408 Comm: qemu-system-aar Kdump: loaded Tainted: G           O        6.16.0-rc4+ #1 PREEMPT
[14201.974926] Tainted: [O]=OOT_MODULE
[14201.974927] Hardware name: To be filled by O.E.M. To be filled by O.E.M./To be filled by O.E.M., BIOS HixxxxEVB V3.4.7 09/04/2025
[14201.974928] Call trace:
[14201.974929]  show_stack+0x20/0x38 (C)
[14201.974934]  dump_stack_lvl+0x80/0xf8
[14201.974938]  dump_stack+0x18/0x28
[14201.974940]  print_bad_pte+0x138/0x1d8
[14201.974943]  vm_normal_page+0xa4/0xd0
[14201.974945]  unmap_page_range+0x648/0x1110
[14201.974947]  unmap_single_vma.constprop.0+0x90/0x118
[14201.974948]  zap_page_range_single_batched+0xbc/0x180
[14201.974950]  zap_page_range_single+0x60/0xa0
[14201.974952]  unmap_mapping_range+0x114/0x140
[14201.974953]  vfio_pci_zap_and_down_write_memory_lock+0x3c/0x58 [vfio_pci_core]
[14201.974957]  vfio_basic_config_write+0x214/0x2d8 [vfio_pci_core]
[14201.974959]  vfio_pci_config_rw+0x1d8/0x1290 [vfio_pci_core]
[14201.974962]  vfio_pci_rw+0x118/0x200 [vfio_pci_core]
[14201.974965]  vfio_pci_core_write+0x28/0x40 [vfio_pci_core]
[14201.974968]  vfio_device_fops_write+0x3c/0x58 [vfio]
[14201.974971]  vfs_write+0xd8/0x400
[14201.974973]  __arm64_sys_pwrite64+0xac/0xe0
[14201.974974]  invoke_syscall+0x50/0x120
[14201.974976]  el0_svc_common.constprop.0+0xc8/0xf0
[14201.974978]  do_el0_svc+0x24/0x38
[14201.974979]  el0_svc+0x38/0x130
[14201.974982]  el0t_64_sync_handler+0xc8/0xd0
[14201.974984]  el0t_64_sync+0x1ac/0x1b0
[14201.975025] Disabling lock debugging due to kernel taint

The PTE value 0xfefefefefefefefe is telling: it is a single 0xfe byte
repeated across the whole entry, which looks like a poison pattern
rather than a valid PTE. Given that the stack shows the bad entry being
found while vfio_pci_zap_and_down_write_memory_lock() unmaps the device
mappings during a config space write, QEMU or the VFIO driver may have
accessed or manipulated a page that had already been freed.
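
For what it's worth, the pattern is just one byte (0xfe) replicated
across all eight bytes of the entry; a quick illustrative check (plain
Python, nothing kernel-specific):

def repeated_byte(val, width=8):
    # True if 'val' is a single byte repeated across 'width' bytes.
    b = val & 0xff
    return val == int.from_bytes(bytes([b]) * width, "little")

assert repeated_byte(0xfefefefefefefefe)      # the PTE from the splat
assert not repeated_byte(0x8000820b1ba0403)   # the PMD, a real table entry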

Thanks in advance for any insights!
Jinqian


