Message-ID: <27175b8e-144d-42cb-b149-04031e9aa698@linux.intel.com>
Date: Thu, 24 Apr 2025 16:53:11 +0800
From: "Mi, Dapeng" <dapeng1.mi@...ux.intel.com>
To: Seth Forshee <sforshee@...nel.org>, Peter Zijlstra
 <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
 Arnaldo Carvalho de Melo <acme@...nel.org>,
 Namhyung Kim <namhyung@...nel.org>, Thomas Gleixner <tglx@...utronix.de>,
 Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
 Sean Christopherson <seanjc@...gle.com>, Paolo Bonzini <pbonzini@...hat.com>
Cc: x86@...nel.org, linux-perf-users@...r.kernel.org, kvm@...r.kernel.org,
 linux-kernel@...r.kernel.org
Subject: Re: kvm guests crash when running "perf kvm top"

Is the "perf kvm top" command executed on the host or in the guest when you
see the guest crash? Is it easy to reproduce? Could you please provide
detailed steps to reproduce the issue with the 6.15-rc1 kernel?
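For reference, the kind of recipe I am after is roughly the following. The
QEMU invocation is only an illustrative guess on my side (image name, memory
size, etc. are placeholders); please share what you actually run:

  # on the Ice Lake host: boot a guest with the vPMU exposed
  qemu-system-x86_64 -enable-kvm -cpu host,pmu=on -smp 2 -m 2048 \
      -drive file=guest.img,format=qcow2 -nographic &

  # then, also on the host, with the older perf build:
  perf kvm top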


On 4/9/2025 12:54 AM, Seth Forshee wrote:
> A colleague of mine reported KVM guest hangs when running "perf kvm top"
> with a 6.1 kernel. Initially it looked like the problem might be fixed
> in newer kernels, but it turned out that later perf changes merely avoid
> triggering the issue. I was able to reproduce the guest crashes with
> 6.15-rc1 in both the host and the guest when using an older version of
> perf. A bisect of perf landed on 7b100989b4f6 ("perf evlist: Remove
> __evlist__add_default"), but that commit does not look like it fixes any
> issue of this kind.
>
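If you want to double-check the perf-side result, restricting the bisect to
commits that touch tools/perf and rebuilding only perf at each step (against
a fixed host kernel and guest) is usually enough. Roughly, with <bad-rev>
and <good-rev> standing in for whatever revisions you tested:

  # bisect only commits that touch the perf tool
  git bisect start -- tools/perf
  git bisect bad <bad-rev>
  git bisect good <good-rev>
  # at each step, rebuild perf and retest against the same guest:
  make -C tools/perf
  ./tools/perf/perf kvm top    # then "git bisect good" or "git bisect bad"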
> This box has an Ice Lake CPU, and we can reproduce on other Ice Lakes,
> but we could not reproduce on another box with Broadwell. On Broadwell,
> guests would crash with older kernels on the host, but that was fixed by
> 971079464001 ("KVM: x86/pmu: fix masking logic for
> MSR_CORE_PERF_GLOBAL_CTRL"). That commit does not fix the issues we see
> on Ice Lake.
>
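For reference, whether a given host tree already contains that Broadwell fix
can be checked with something like:

  # does v6.15-rc1 contain commit 971079464001?
  git merge-base --is-ancestor 971079464001 v6.15-rc1 \
      && echo "fix present" || echo "fix missing"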
> When the guests crash we get no output on the serial console, but I was
> able to extract this from a memory dump:
>
> BUG: unable to handle page fault for address: fffffe76ffbaf00000
> BUG: unable to handle page fault for address: fffffe76ffbaf00000
> #PF: supervisor write access in kernel mode
> #PF: error_code(0x0002) - not-present page
> BUG: unable to handle page fault for address: fffffe76ffbaf00000
> #PF: supervisor write access in kernel mode
> #PF: error_code(0x0002) - not-present page
> PGD 2e044067 P4D 3ec42067 PUD 3ec41067 PMD 3ec40067 PTE ffffffffff120
> Oops: Oops: 0002 [#1] SMP NOPTI
> BUG: unable to handle page fault for address: fffffe76ffbaf00000
> #PF: supervisor write access in kernel mode
> #PF: error_code(0x0002) - not-present page
> PGD 2e044067 P4D 3ec42067 PUD 3ec41067 PMD 3ec40067 PTE ffffffffff120
> Oops: Oops: 0002 [#2] SMP NOPTI
> CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.15.0-rc1 #3 VOLUNTARY
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009)/Incus, BIOS unknown 02/02/2022
> BUG: unable to handle page fault for address: fffffe76ffbaf00000
> #PF: supervisor write access in kernel mode
> #PF: error_code(0x0002) - not-present page
> PGD 2e044067 P4D 3ec42067 PUD 3ec41067 PMD 3ec40067 PTE ffffffffff120
> Oops: Oops: 0002 [#3] SMP NOPTI
> CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.15.0-rc1 #3 VOLUNTARY
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009)/Incus, BIOS unknown 02/02/2022
>
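For the archives: with a libvirt-managed guest, that kind of dump can be
taken and searched roughly like this (the domain name is a placeholder, and
this is only a sketch of the approach, not necessarily what Seth used):

  # memory-only guest dump, then grep it for the oops text
  virsh dump --memory-only --format elf <domain> guest.core
  strings guest.core | grep -A6 'BUG: unable to handle'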
> We got something different, though, from an Ubuntu VM running their 6.8
> kernel:
>
> BUG: kernel NULL pointer dereference, address: 000000000000002828
> BUG: kernel NULL pointer dereference, address: 000000000000002828
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 10336a067 P4D 0 
> Oops: 0000 [#1] PREEMPT SMP NOPTI
> BUG: kernel NULL pointer dereference, address: 000000000000002828
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 10336a067 P4D 0 
> Oops: 0000 [#2] PREEMPT SMP NOPTI
> BUG: kernel NULL pointer dereference, address: 000000000000002828
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 10336a067 P4D 0 
> Oops: 0000 [#3] PREEMPT SMP NOPTI
> CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.8.0-56-generic #58-Ubuntu
> BUG: kernel NULL pointer dereference, address: 000000000000002828
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 10336a067 P4D 0 
> Oops: 0000 [#4] PREEMPT SMP NOPTI
> CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.8.0-56-generic #58-Ubuntu
> BUG: kernel NULL pointer dereference, address: 000000000000002828
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 10336a067 P4D 0 
> Oops: 0000 [#5] PREEMPT SMP NOPTI
> CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.8.0-56-generic #58-Ubuntu
> RIP: 0010:__sprint_symbol.isra.0+0x6/0x120
> Code: ff e8 0e 9d 00 01 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 55 <48> 89 e5 41 57 49 89 f7 41 56 4c 63 f2 4c 8d 45 b8 48 8d 55 c0 41
> RSP: 0018:ff25e52d000e6ff8 EFLAGS: 00000046
> BUG: #DF stack guard page was hit at 0000000040b441e1 (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
> BUG: #DF stack guard page was hit at 000000002fed44fb (stack is 00000000a1788787..000000008e7f4216)
>
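FWIW, the Code: line above can be disassembled with scripts/decodecode from
a kernel source tree, which would show the exact faulting instruction
(here, presumably the push at the start of __sprint_symbol):

  # paste the oops (the Code: line is what matters) into decodecode
  ./scripts/decodecode < oops.txt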
> CPU information from one of the boxes where we see this:
>
> processor	: 0
> vendor_id	: GenuineIntel
> cpu family	: 6
> model		: 106
> model name	: Intel(R) Xeon(R) Gold 5318Y CPU @ 2.10GHz
> stepping	: 6
> microcode	: 0xd0003f5
> cpu MHz		: 800.000
> cache size	: 36864 KB
> physical id	: 0
> siblings	: 44
> core id		: 0
> cpu cores	: 22
> apicid		: 0
> initial apicid	: 0
> fpu		: yes
> fpu_exception	: yes
> cpuid level	: 27
> wp		: yes
> flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
> vmx flags	: vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb ept_5level flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs pml ept_violation_ve ept_mode_based_exec tsc_scaling
> bugs		: spectre_v1 spectre_v2 spec_store_bypass swapgs mmio_stale_data eibrs_pbrsb gds bhi spectre_v2_user
> bogomips	: 4000.00
> clflush size	: 64
> cache_alignment	: 64
> address sizes	: 46 bits physical, 57 bits virtual
> power management:
>
> Let me know if I can provide any additional information or testing.
>
> Thanks,
> Seth
>
