Message-ID: <20181107094511.GG24195@shao2-debian>
Date: Wed, 7 Nov 2018 17:45:11 +0800
From: kernel test robot <rong.a.chen@...el.com>
To: "Paul E. McKenney" <paulmck@...ux.ibm.com>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>, lkp@...org
Subject: [LKP] [EXP rcu] 258ba8e089: WARNING:at_kernel/rcu/rcutorture.c:#rcu_torture_fwd_prog
FYI, we noticed the following commit (built with gcc-7):
commit: 258ba8e089db23f760139266c232f01bad73f85c ("EXP rcu: Revert expedited GP parallelization cleverness")
https://git.kernel.org/cgit/linux/kernel/git/paulmck/linux-rcu.git bigeasy.2018.10.23a
in testcase: trinity
with the following parameters:
runtime: 300s
test-description: Trinity is a Linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap -smp 2 -m 512M
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-----------------------------------------------------------+------------+------------+
| | 9be9040bd1 | 258ba8e089 |
+-----------------------------------------------------------+------------+------------+
| boot_successes | 6 | 8 |
| boot_failures | 22 | 26 |
| IP-Config:Auto-configuration_of_network_failed | 22 | 21 |
| invoked_oom-killer:gfp_mask=0x | 2 | 2 |
| Mem-Info | 1 | |
| Out_of_memory:Kill_process | 1 | |
| WARNING:at_kernel/rcu/rcutorture.c:#rcutorture_oom_notify | 0 | 6 |
| RIP:rcutorture_oom_notify | 0 | 6 |
| WARNING:possible_circular_locking_dependency_detected | 0 | 2 |
| WARNING:at_kernel/rcu/rcutorture.c:#rcu_torture_fwd_prog | 0 | 6 |
| RIP:rcu_torture_fwd_prog | 0 | 6 |
| RIP:__put_user_4 | 0 | 1 |
| calltrace:irq_exit | 0 | 2 |
+-----------------------------------------------------------+------------+------------+
[ 372.693208] WARNING: CPU: 1 PID: 64 at kernel/rcu/rcutorture.c:1827 rcu_torture_fwd_prog+0xde1/0xf18
[ 372.693208] CPU: 1 PID: 64 Comm: rcu_torture_fwd Not tainted 4.19.0-rc1-00225-g258ba8e #1
[ 372.693208] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 372.693208] RIP: 0010:rcu_torture_fwd_prog+0xde1/0xf18
[ 372.693208] Code: d8 23 00 48 8b 85 68 f9 ff ff 8a 95 10 fc ff ff 48 c1 e8 03 c6 04 18 f8 84 d2 0f 85 97 00 00 00 48 83 bd a8 f9 ff ff 63 7f 02 <0f> 0b 4c 8b 85 98 f9 ff ff 4c 03 85 90 f9 ff ff 48 8b 85 a0 f9 ff
[ 372.693208] RSP: 0000:ffff88000bb57840 EFLAGS: 00010293
[ 372.693208] RAX: 1ffff1000176af60 RBX: dffffc0000000000 RCX: 0000000000000000
[ 372.693208] RDX: ffffed000176af00 RSI: 0000000000000000 RDI: ffff88000c11c9f0
[ 372.693208] RBP: ffff88000bb57ef0 R08: fffffbfff0ca382d R09: fffffbfff0ca382d
[ 372.693208] R10: 0000000000000000 R11: ffffffff8651c163 R12: 00000001000033f3
[ 372.693208] R13: 000000000000018f R14: 0000000000013d8a R15: 0000000000000000
[ 372.693208] FS: 0000000000000000(0000) GS:ffff880018500000(0000) knlGS:0000000000000000
[ 372.693208] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 372.693208] CR2: 0000000000000000 CR3: 000000000645c001 CR4: 00000000000206a0
[ 372.693208] Call Trace:
[ 372.693208] ? srcu_torture_cleanup+0x31/0x31
[ 372.693208] ? debug_show_all_locks+0x441/0x441
[ 372.693208] ? lock_release+0x6b2/0x6b2
[ 372.917551] ? rcu_dynticks_curr_cpu_in_eqs+0x7a/0x10b
[ 372.917551] ? rcu_softirq_qs+0x6/0x6
[ 372.917551] ? find_held_lock+0x2d/0xf0
[ 372.917551] ? lock_release+0x5ff/0x6b2
[ 372.917551] ? finish_task_switch+0x36b/0x518
[ 372.917551] ? lock_downgrade+0x4f9/0x4f9
[ 372.917551] ? do_raw_spin_unlock+0xb7/0x290
[ 372.917551] ? state_name+0x68/0x68
[ 372.917551] ? pvclock_read_flags+0xaf/0xaf
[ 372.917551] ? trace_raw_output_preemptirq_template+0xe8/0xe8
[ 372.917551] ? __vtime_account_system+0x19/0x91
[ 372.917551] ? mark_lock+0x26/0x2f0
[ 372.917551] ? __lock_acquire+0xd71/0x2028
[ 372.917551] ? finish_task_switch+0x45d/0x518
[ 372.917551] ? __kthread_parkme+0x3d/0x13b
[ 372.917551] ? debug_show_all_locks+0x441/0x441
[ 372.917551] ? __switch_to_asm+0x40/0x70
[ 372.917551] ? __switch_to_asm+0x34/0x70
[ 372.917551] ? __switch_to_asm+0x40/0x70
[ 372.917551] ? __switch_to_asm+0x40/0x70
[ 372.917551] ? __schedule+0x1046/0x1089
[ 372.917551] ? firmware_map_remove+0xd7/0xd7
[ 372.917551] ? lockdep_hardirqs_off+0xd7/0x21e
[ 372.917551] ? find_held_lock+0x2d/0xf0
[ 373.117489] ? __kthread_parkme+0x9f/0x13b
[ 373.117489] ? lock_downgrade+0x4f9/0x4f9
[ 373.117489] ? schedule+0x2ef/0x35f
[ 373.117489] ? do_raw_spin_unlock+0xb7/0x290
[ 373.117489] ? do_raw_spin_trylock+0x17c/0x17c
[ 373.117489] ? trace_raw_output_preemptirq_template+0xe8/0xe8
[ 373.117489] ? trace_raw_output_preemptirq_template+0xe8/0xe8
[ 373.117489] ? _raw_spin_unlock_irqrestore+0x3d/0x4f
[ 373.117489] ? __kthread_parkme+0x28/0x13b
[ 373.117489] ? lockdep_hardirqs_on+0x427/0x476
[ 373.117489] ? kthread+0x2c6/0x2d5
[ 373.117489] ? srcu_torture_cleanup+0x31/0x31
[ 373.117489] kthread+0x2c6/0x2d5
[ 373.117489] ? srcu_torture_cleanup+0x31/0x31
[ 373.117489] ? __kthread_cancel_work+0x25c/0x25c
[ 373.117489] ret_from_fork+0x3a/0x50
[ 373.117489] irq event stamp: 1649480
[ 373.117489] hardirqs last enabled at (1649479): [<ffffffff84a72a7d>] _raw_spin_unlock_irqrestore+0x3d/0x4f
[ 373.317519] hardirqs last disabled at (1649480): [<ffffffff81003046>] trace_hardirqs_off_thunk+0x1a/0x1c
[ 373.317519] softirqs last enabled at (1648500): [<ffffffff84e00718>] __do_softirq+0x718/0x7ee
[ 373.317519] softirqs last disabled at (1648493): [<ffffffff811101d5>] irq_exit+0x7d/0x14d
[ 373.317519] random: get_random_bytes called from init_oops_id+0x1d/0x2c with crng_init=0
[ 373.317519] ---[ end trace 2537eb32c30d17a7 ]---
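For context, the warning above comes from rcutorture's forward-progress kthread, which complains when RCU grace periods fail to complete within its test window. The fragment below is only an illustrative sketch of that kind of check, not the actual rcutorture code; the 8-second window, the names fwd_prog_sketch()/fwd_gp_cb(), and the use of a flag-setting call_rcu() callback are assumptions made purely for illustration:

/* Illustrative sketch only -- not the rcutorture implementation. */
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/jiffies.h>
#include <linux/atomic.h>
#include <linux/sched.h>

static struct rcu_head fwd_rh;
static atomic_t fwd_gp_done;

/* RCU callback: runs once a full grace period has elapsed. */
static void fwd_gp_cb(struct rcu_head *rhp)
{
	atomic_set(&fwd_gp_done, 1);
}

/*
 * Ask for a grace period, then give RCU a fixed window
 * (hypothetically 8 seconds here) to finish it while this thread
 * keeps yielding the CPU.  If the grace period still has not
 * completed, warn -- the same general shape as the WARNING in the
 * log above, which fires from rcutorture's forward-progress kthread.
 */
static int fwd_prog_sketch(void *unused)
{
	unsigned long deadline = jiffies + 8 * HZ;

	atomic_set(&fwd_gp_done, 0);
	call_rcu(&fwd_rh, fwd_gp_cb);
	while (time_before(jiffies, deadline) && !atomic_read(&fwd_gp_done))
		cond_resched();

	WARN_ON_ONCE(!atomic_read(&fwd_gp_done));
	rcu_barrier();	/* make fwd_rh safe to reuse or free */
	return 0;
}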
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
View attachment "config-4.19.0-rc1-00225-g258ba8e" of type "text/plain" (130752 bytes)
View attachment "job-script" of type "text/plain" (3941 bytes)
Download attachment "dmesg.xz" of type "application/x-xz" (22684 bytes)