Message-ID: <20210423032514.GB13944@xsang-OptiPlex-9020>
Date:   Fri, 23 Apr 2021 11:25:14 +0800
From:   kernel test robot <oliver.sang@...el.com>
To:     Vitaly Kuznetsov <vkuznets@...hat.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
        lkp@...el.com, xudong.hao@...el.com
Subject: [KVM]  4fc096a99e: kernel-selftests.kvm.set_memory_region_test.fail



Greetings,

FYI, we noticed the following commit (built with gcc-9):

commit: 4fc096a99e01dd06dc55bef76ade7f8d76653245 ("KVM: Raise the maximum number of user memslots")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master


in testcase: kernel-selftests
version: kernel-selftests-x86_64-cf9ae1bd-1_20210401
with the following parameters:

	group: kvm
	ucode: 0x28

test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt


on test machine: 8 threads Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz with 16G memory

caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):




If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@...el.com>

KERNEL SELFTESTS: linux_headers_dir is /usr/src/linux-headers-x86_64-rhel-8.3-kselftests-4fc096a99e01dd06dc55bef76ade7f8d76653245
2021-04-22 20:27:10 ln -sf /usr/bin/clang
2021-04-22 20:27:10 ln -sf /usr/bin/llc
2021-04-22 20:27:10 sed -i s/default_timeout=45/default_timeout=300/ kselftest/runner.sh
LKP WARN miss config CONFIG_KVM_AMD= of kvm/config
source /lkp/lkp/src/lib/tests/kernel-selftests-ext.sh
2021-04-22 20:27:10 cp kvm/settings /kselftests/kvm/settings
2021-04-22 20:27:10 /kselftests/run_kselftest.sh -c kvm
TAP version 13
1..33
# selftests: kvm: cr4_cpuid_sync_test
ok 1 selftests: kvm: cr4_cpuid_sync_test
# selftests: kvm: evmcs_test
ok 2 selftests: kvm: evmcs_test
# selftests: kvm: get_cpuid_test
ok 3 selftests: kvm: get_cpuid_test
# selftests: kvm: hyperv_cpuid
ok 4 selftests: kvm: hyperv_cpuid
# selftests: kvm: kvm_pv_test
# testing msr: MSR_KVM_SYSTEM_TIME (0x12)
# testing msr: MSR_KVM_SYSTEM_TIME_NEW (0x4b564d01)
# testing msr: MSR_KVM_WALL_CLOCK (0x11)
# testing msr: MSR_KVM_WALL_CLOCK_NEW (0x4b564d00)
# testing msr: MSR_KVM_ASYNC_PF_EN (0x4b564d02)
# testing msr: MSR_KVM_STEAL_TIME (0x4b564d03)
# testing msr: MSR_KVM_PV_EOI_EN (0x4b564d04)
# testing msr: MSR_KVM_POLL_CONTROL (0x4b564d05)
# testing msr: MSR_KVM_ASYNC_PF_INT (0x4b564d06)
# testing msr: MSR_KVM_ASYNC_PF_ACK (0x4b564d07)
# testing hcall: KVM_HC_KICK_CPU (5)
# testing hcall: KVM_HC_SEND_IPI (10)
# testing hcall: KVM_HC_SCHED_YIELD (11)
ok 5 selftests: kvm: kvm_pv_test
# selftests: kvm: mmio_warning_test
# ret1=0 exit_reason=17 suberror=1
# ret1=0 exit_reason=8 suberror=65530
# ret1=0 exit_reason=17 suberror=1
# ret1=0 exit_reason=8 suberror=65530
# ret1=0 exit_reason=17 suberror=1
# ret1=0 exit_reason=8 suberror=65530
# ret1=0 exit_reason=17 suberror=1
# ret1=0 exit_reason=8 suberror=65530
# ret1=0 exit_reason=17 suberror=1
# ret1=0 exit_reason=8 suberror=65530
# ret1=0 exit_reason=17 suberror=1
# ret1=0 exit_reason=8 suberror=65530
# ret1=0 exit_reason=17 suberror=1
# ret1=0 exit_reason=8 suberror=65530
# ret1=0 exit_reason=17 suberror=1
# ret1=0 exit_reason=8 suberror=65530
# ret1=0 exit_reason=17 suberror=1
# ret1=0 exit_reason=8 suberror=65530
# ret1=0 exit_reason=17 suberror=1
# ret1=0 exit_reason=8 suberror=65530
ok 6 selftests: kvm: mmio_warning_test
# selftests: kvm: platform_info_test
ok 7 selftests: kvm: platform_info_test
# selftests: kvm: set_sregs_test
ok 8 selftests: kvm: set_sregs_test
# selftests: kvm: smm_test
ok 9 selftests: kvm: smm_test
# selftests: kvm: state_test
ok 10 selftests: kvm: state_test
# selftests: kvm: vmx_preemption_timer_test
# ==== Test Assertion Failure ====
#   x86_64/vmx_preemption_timer_test.c:230: uc.args[4] < uc.args[5]
#   pid=1624 tid=1624 - Interrupted system call
#      1	0x000000000040166f: main at vmx_preemption_timer_test.c:230
#      2	0x00007f346d85809a: ?? ??:0
#      3	0x00000000004016b9: _start at ??:?
#   Stage 2: L2 PT expiry TSC (3202417017) > L2 TSC deadline (3202312608)
# Stage 2: L1 PT expiry TSC (3202584596) , L1 TSC deadline (3202189728)
# Stage 2: L2 PT expiry TSC (3202417017) , L2 TSC deadline (3202312608)
not ok 11 selftests: kvm: vmx_preemption_timer_test # exit=254
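The assertion at vmx_preemption_timer_test.c:230 requires the observed preemption-timer expiry TSC (uc.args[4]) to be below the computed TSC deadline (uc.args[5]). A quick sketch with the stage-2 L2 values from the log above (variable names are illustrative, not from the test source) shows by how much the timer fired late:

```python
# Stage 2, L2 values reported in the failure output above.
l2_expiry_tsc = 3202417017    # TSC when the preemption timer actually expired
l2_deadline_tsc = 3202312608  # TSC deadline the test computed

# The selftest asserts expiry < deadline; here the timer fired late.
assert not (l2_expiry_tsc < l2_deadline_tsc)
overshoot = l2_expiry_tsc - l2_deadline_tsc
print(f"timer fired {overshoot} TSC cycles past the deadline")
```

So the timer overshot the deadline by 104409 TSC cycles, which is what trips the assertion.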
# selftests: kvm: svm_vmcall_test
# nested SVM not enabled, skipping test
ok 12 selftests: kvm: svm_vmcall_test # SKIP
# selftests: kvm: sync_regs_test
ok 13 selftests: kvm: sync_regs_test
# selftests: kvm: userspace_msr_exit_test
# To run the instruction emulated tests set the module parameter 'kvm.force_emulation_prefix=1'
ok 14 selftests: kvm: userspace_msr_exit_test
# selftests: kvm: vmx_apic_access_test
ok 15 selftests: kvm: vmx_apic_access_test
# selftests: kvm: vmx_close_while_nested_test
ok 16 selftests: kvm: vmx_close_while_nested_test
# selftests: kvm: vmx_dirty_log_test
ok 17 selftests: kvm: vmx_dirty_log_test
# selftests: kvm: vmx_set_nested_state_test
ok 18 selftests: kvm: vmx_set_nested_state_test
# selftests: kvm: vmx_tsc_adjust_test
# IA32_TSC_ADJUST is -4294975532 (-1 * TSC_ADJUST_VALUE + -8236).
# IA32_TSC_ADJUST is -4294975532 (-1 * TSC_ADJUST_VALUE + -8236).
# IA32_TSC_ADJUST is -8589947763 (-2 * TSC_ADJUST_VALUE + -13171).
# IA32_TSC_ADJUST is -8589947763 (-2 * TSC_ADJUST_VALUE + -13171).
ok 19 selftests: kvm: vmx_tsc_adjust_test
# selftests: kvm: xapic_ipi_test
# Halter vCPU thread started
# vCPU thread running vCPU 0
# Halter vCPU thread reported its APIC ID: 0 after 1 seconds.
# IPI sender vCPU thread started. Letting vCPUs run for 3 seconds.
# vCPU thread running vCPU 1
# Test successful after running for 3 seconds.
# Sending vCPU sent 226153 IPIs to halting vCPU
# Halting vCPU halted 226153 times, woke 226152 times, received 226152 IPIs.
# Halter APIC ID=0
# Sender ICR value=0xa5 ICR2 value=0
# Halter TPR=0 PPR=0 LVR=0x50014
# Migrations attempted: 0
# Migrations completed: 0
ok 20 selftests: kvm: xapic_ipi_test
# selftests: kvm: xss_msr_test
# IA32_XSS is not supported by the vCPU, skipping test
ok 21 selftests: kvm: xss_msr_test # SKIP
# selftests: kvm: debug_regs
ok 22 selftests: kvm: debug_regs
# selftests: kvm: tsc_msrs_test
ok 23 selftests: kvm: tsc_msrs_test
# selftests: kvm: vmx_pmu_msrs_test
ok 24 selftests: kvm: vmx_pmu_msrs_test
# selftests: kvm: xen_shinfo_test
ok 25 selftests: kvm: xen_shinfo_test
# selftests: kvm: xen_vmcall_test
ok 26 selftests: kvm: xen_vmcall_test
# selftests: kvm: demand_paging_test
# Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
# guest physical test memory offset: 0x7fbffff000
# Finished creating vCPUs and starting uffd threads
# Started all vCPUs
# All vCPU threads joined
# Total guest execution time: 1.974251864s
# Overall demand paging rate: 24402.471039 pgs/sec
ok 27 selftests: kvm: demand_paging_test
# selftests: kvm: dirty_log_test
# Test iterations: 32, interval: 10 (ms)
# Testing Log Mode 'dirty-log'
# Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
# guest physical test memory offset: 0x7fbfffc000
# Dirtied 1024 pages
# Total bits checked: dirty (68574), clear (8057983), track_next (4131)
# Testing Log Mode 'clear-log'
# Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
# guest physical test memory offset: 0x7fbfffc000
# Dirtied 1024 pages
# Total bits checked: dirty (334670), clear (7791887), track_next (4882)
# Testing Log Mode 'dirty-ring'
# Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
# guest physical test memory offset: 0x7fbfffc000
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 1 collected 1431 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 2 collected 34922 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 3 collected 1223 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 4 collected 1161 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 5 collected 1165 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 6 collected 1161 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 7 collected 1166 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 8 collected 1161 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 9 collected 1168 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 10 collected 1162 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 11 collected 1165 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 12 collected 1175 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 13 collected 1176 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 14 collected 1173 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 15 collected 1168 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 16 collected 1179 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 17 collected 1186 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 18 collected 1182 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 19 collected 1188 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 20 collected 1190 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 21 collected 1195 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 22 collected 1180 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 23 collected 1161 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 24 collected 1143 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 25 collected 1150 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 26 collected 1160 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 27 collected 1156 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 28 collected 1160 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 29 collected 1166 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 30 collected 678 pages
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# Iteration 31 collected 849 pages
# vcpu stops because dirty ring is full...
# vcpu continues now.
# Dirtied 32768 pages
# Total bits checked: dirty (69500), clear (8057057), track_next (4738)
ok 28 selftests: kvm: dirty_log_test
# selftests: kvm: dirty_log_perf_test
# Test iterations: 2
# Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
# guest physical test memory offset: 0x7fbffff000
# Populate memory time: 1.757522426s
# Enabling dirty logging time: 0.000847277s
# 
# Iteration 1 dirty memory time: 0.007516471s
# Iteration 1 get dirty log time: 0.000023223s
# Iteration 1 clear dirty log time: 0.005255955s
# Iteration 2 dirty memory time: 0.337495562s
# Iteration 2 get dirty log time: 0.000004629s
# Iteration 2 clear dirty log time: 0.004977110s
# Disabling dirty logging time: 0.007335534s
# Get dirty log over 2 iterations took 0.000027852s. (Avg 0.000013926s/iteration)
# Clear dirty log over 2 iterations took 0.010233065s. (Avg 0.005116532s/iteration)
ok 29 selftests: kvm: dirty_log_perf_test
# selftests: kvm: kvm_create_max_vcpus
# KVM_CAP_MAX_VCPU_ID: 1023
# KVM_CAP_MAX_VCPUS: 288
# Testing creating 288 vCPUs, with IDs 0...287.
# Testing creating 288 vCPUs, with IDs 735...1022.
ok 30 selftests: kvm: kvm_create_max_vcpus
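The two ID ranges in the kvm_create_max_vcpus output follow directly from the reported capabilities: the test creates KVM_CAP_MAX_VCPUS vCPUs first with the lowest IDs, then with the highest permitted IDs. A back-of-the-envelope check (variable names are illustrative):

```python
max_vcpu_id = 1023  # KVM_CAP_MAX_VCPU_ID from the log
max_vcpus = 288     # KVM_CAP_MAX_VCPUS from the log

low_range = range(0, max_vcpus)                           # IDs 0..287
high_range = range(max_vcpu_id - max_vcpus, max_vcpu_id)  # IDs 735..1022

print(low_range.start, low_range.stop - 1)
print(high_range.start, high_range.stop - 1)
```

1023 - 288 = 735, matching the "IDs 735...1022" line above.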
# selftests: kvm: memslot_modification_stress_test
# ==== Test Assertion Failure ====
#   lib/kvm_util.c:693: (((guest_paddr >> vm->page_shift) + npages) - 1) <= vm->max_gfn
#   pid=2134 tid=2134 - Success
#      1	0x0000000000405291: vm_userspace_mem_region_add at kvm_util.c:693
#      2	0x000000000040281d: add_remove_memslot at memslot_modification_stress_test.c:80
#      3	 (inlined by) run_test at memslot_modification_stress_test.c:122
#      4	0x0000000000408033: for_each_guest_mode at guest_modes.c:37
#      5	0x0000000000402494: main at memslot_modification_stress_test.c:209
#      6	0x00007f6dbe3f709a: ?? ??:0
#      7	0x0000000000402539: _start at ??:?
#   Physical range beyond maximum supported physical address,
#   guest_paddr: 0x8000001000 npages: 0x1
#   vm->max_gfn: 0x7ffffff vm->page_size: 0x1000
# Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
# guest physical test memory offset: 0x7fbffff000
# Finished creating vCPUs
# Started all vCPUs
not ok 31 selftests: kvm: memslot_modification_stress_test # exit=254
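The assertion at lib/kvm_util.c:693 checks that the last page of the new memory region still falls within the guest's addressable physical range. Plugging in the values from the failure output (a sketch of the bounds check, not the selftest code itself) shows the region lands one GFN past the limit:

```python
guest_paddr = 0x8000001000  # requested GPA of the new region (from the log)
npages = 0x1
page_shift = 12             # vm->page_size is 0x1000
max_gfn = 0x7ffffff         # vm->max_gfn (from the log)

last_gfn = (guest_paddr >> page_shift) + npages - 1
# 0x8000001 > 0x7ffffff: the region lies beyond the supported physical range.
assert last_gfn > max_gfn
print(hex(last_gfn))
```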
# selftests: kvm: set_memory_region_test
# Testing KVM_RUN with zero added memory regions
# Allowed number of memory slots: 32764
# Adding slots 0..32763, each memory region with 2048K size
#
not ok 32 selftests: kvm: set_memory_region_test # TIMEOUT 120 seconds
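The timeout is consistent with the raised slot limit introduced by the commit under test: with 32764 slots of 2048K each, the test now walks roughly 64 GiB worth of memory regions, and the slot add loop no longer finishes within the 120-second limit. Rough arithmetic (a back-of-the-envelope sketch, not measured data):

```python
slots = 32764     # "Allowed number of memory slots" from the log
region_kib = 2048 # per-region size from the log

total_kib = slots * region_kib
total_gib = total_kib / (1024 * 1024)
print(f"{total_gib:.1f} GiB across {slots} regions")
```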
# selftests: kvm: steal_time
ok 33 selftests: kvm: steal_time



To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install                job.yaml  # job file is attached in this email
        bin/lkp split-job --compatible job.yaml
        bin/lkp run                    compatible-job.yaml



---
0DAY/LKP+ Test Infrastructure                   Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org       Intel Corporation

Thanks,
Oliver Sang


View attachment "config-5.11.0-rc3-00160-g7c039810912c" of type "text/plain" (173083 bytes)

View attachment "job-script" of type "text/plain" (6274 bytes)

View attachment "kernel-selftests" of type "text/plain" (12972 bytes)

View attachment "job.yaml" of type "text/plain" (5413 bytes)

View attachment "reproduce" of type "text/plain" (189 bytes)
