Message-ID: <868qhfxofm.wl-maz@kernel.org>
Date: Mon, 13 Oct 2025 08:15:25 +0100
From: Marc Zyngier <maz@...nel.org>
To: Jinqian Yang <yangjinqian1@...wei.com>
Cc: <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>,
Alex Williamson <alex.williamson@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Zenghui Yu <yuzenghui@...wei.com>,
jiangkunkun <jiangkunkun@...wei.com>,
Zhou Wang <wangzhou1@...ilicon.com>,
liuyonglong <liuyonglong@...wei.com>
Subject: Re: [Question] QEMU VM fails to restart repeatedly with VFIO passthrough on GICv4.1
On Mon, 13 Oct 2025 03:56:20 +0100,
Jinqian Yang <yangjinqian1@...wei.com> wrote:
>
> Hi, all
>
> On a GICv4.1 environment running kernel 6.16, when launching VMs with
> QEMU and passing through VF devices, after repeatedly booting and
> killing the VMs hundreds of times, the host reports call traces and the
> VMs become unresponsive. The call traces show VFIO call stacks.
>
> [14201.974880] BUG: Bad page map in process qemu-system-aar
> pte:fefefefefefefefe pmd:8000820b1ba0403
> [14201.974895] addr:0000fffdd7400000 vm_flags:80240644bb
> anon_vma:0000000000000000 mapping:ffff08208e9b7758 index:401eed6a
> [14201.974905] file:[vfio-device] fault:vfio_pci_mmap_page_fault
> [vfio_pci_core] mmap:vfio_device_fops_mmap [vfio] mmap_prepare: 0x0
> read_folio:0x0
> [14201.974923] CPU: 2 UID: 0 PID: 50408 Comm: qemu-system-aar Kdump:
> loaded Tainted: G O 6.16.0-rc4+ #1 PREEMPT
> [14201.974926] Tainted: [O]=OOT_MODULE
> [14201.974927] Hardware name: To be filled by O.E.M. To be filled by
> O.E.M./To be filled by O.E.M., BIOS HixxxxEVB V3.4.7 09/04/2025
> [14201.974928] Call trace:
> [14201.974929] show_stack+0x20/0x38 (C)
> [14201.974934] dump_stack_lvl+0x80/0xf8
> [14201.974938] dump_stack+0x18/0x28
> [14201.974940] print_bad_pte+0x138/0x1d8
> [14201.974943] vm_normal_page+0xa4/0xd0
> [14201.974945] unmap_page_range+0x648/0x1110
> [14201.974947] unmap_single_vma.constprop.0+0x90/0x118
> [14201.974948] zap_page_range_single_batched+0xbc/0x180
> [14201.974950] zap_page_range_single+0x60/0xa0
> [14201.974952] unmap_mapping_range+0x114/0x140
> [14201.974953] vfio_pci_zap_and_down_write_memory_lock+0x3c/0x58
> [vfio_pci_core]
> [14201.974957] vfio_basic_config_write+0x214/0x2d8 [vfio_pci_core]
> [14201.974959] vfio_pci_config_rw+0x1d8/0x1290 [vfio_pci_core]
> [14201.974962] vfio_pci_rw+0x118/0x200 [vfio_pci_core]
> [14201.974965] vfio_pci_core_write+0x28/0x40 [vfio_pci_core]
> [14201.974968] vfio_device_fops_write+0x3c/0x58 [vfio]
> [14201.974971] vfs_write+0xd8/0x400
> [14201.974973] __arm64_sys_pwrite64+0xac/0xe0
> [14201.974974] invoke_syscall+0x50/0x120
> [14201.974976] el0_svc_common.constprop.0+0xc8/0xf0
> [14201.974978] do_el0_svc+0x24/0x38
> [14201.974979] el0_svc+0x38/0x130
> [14201.974982] el0t_64_sync_handler+0xc8/0xd0
> [14201.974984] el0t_64_sync+0x1ac/0x1b0
> [14201.975025] Disabling lock debugging due to kernel taint
>
> This value (0xfefefefefefefefe) is very special - it's a "poison" value.
> QEMU or the VFIO driver may have attempted to access or manipulate a
> page that had already been freed.
>
> Thanks in advance for any insights!
I have no insight whatsoever, but there is very little in this report
to go on. So here are the questions you should ask yourself:
- How specific is this to GICv4.1?
- Does it stop triggering if you disable direct injection?
- What makes you think this value is explicitly a poison value rather
than some other data?
- Who writes this "poison" data?
- Does it reproduce on 6.17 rather than a dodgy 6.16-rc4?
- What operation was QEMU performing on the device when this happens?
- Using what devices passed to the guest?
- What do the usual debug options (KASAN, lockdep) report? (see the
  config sketch after this list)
- What is so specific about this HW?
- What is this out-of-tree module?
- Have you tried without it?
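
For the debug options question, a minimal host configuration sketch
(the exact option set is my assumption - adjust to whatever your tree
offers) would be something like:

  # Host kernel .config fragment for catching use-after-free and
  # locking issues around the VFIO/vGIC paths
  CONFIG_KASAN=y
  CONFIG_KASAN_GENERIC=y
  CONFIG_PROVE_LOCKING=y
  CONFIG_DEBUG_VM=y
  CONFIG_PAGE_POISONING=y

For the direct injection question, the kvm-arm.vgic_v4_enable
command-line parameter controls GICv4 support on the host; check its
default and effect against the vgic-v3 probe code in your tree before
relying on it.
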
These are the questions I'd ask myself before even posting something,
because each and every one of them is relevant. There are probably
more, but once you have answered these questions, you should be able to
figure out what the gaps are in your understanding of the problem.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.