Message-ID: <CAERHkrvgMNf2rmQ_pF5S7Wq64fnkW4HzP_VYPL4vQWyKgHPgxA@mail.gmail.com>
Date: Wed, 13 Mar 2019 13:55:24 +0800
From: Aubrey Li <aubrey.intel@...il.com>
To: Subhra Mazumdar <subhra.mazumdar@...cle.com>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Paul Turner <pjt@...gle.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"Fr?d?ric Weisbecker" <fweisbec@...il.com>,
Kees Cook <keescook@...omium.org>,
Greg Kerr <kerrnel@...gle.com>
Subject: Re: [RFC][PATCH 00/16] sched: Core scheduling
On Tue, Mar 12, 2019 at 3:45 PM Aubrey Li <aubrey.intel@...il.com> wrote:
>
> On Tue, Mar 12, 2019 at 7:36 AM Subhra Mazumdar
> <subhra.mazumdar@...cle.com> wrote:
> >
> >
> > On 3/11/19 11:34 AM, Subhra Mazumdar wrote:
> > >
> > > On 3/10/19 9:23 PM, Aubrey Li wrote:
> > >> On Sat, Mar 9, 2019 at 3:50 AM Subhra Mazumdar
> > >> <subhra.mazumdar@...cle.com> wrote:
> > >>> expected. Most of the performance recovery happens in patch 15 which,
> > >>> unfortunately, is also the one that introduces the hard lockup.
> > >>>
> > >> After applied Subhra's patch, the following is triggered by enabling
> > >> core sched when a cgroup is
> > >> under heavy load.
> > >>
> > > It seems you are facing some other deadlock where printk is involved.
> > > Can you
> > > drop the last patch (patch 16 sched: Debug bits...) and try?
> > >
> > > Thanks,
> > > Subhra
> > >
> > Never mind, I am seeing the same lockdep deadlock output even without
> > patch 16. Btw, the NULL fix had something missing,
>
> One more NULL pointer dereference:
>
> Mar 12 02:24:46 aubrey-ivb kernel: [ 201.916741] core sched enabled
> [ 201.950203] BUG: unable to handle kernel NULL pointer dereference
> at 0000000000000008
> [ 201.950254] ------------[ cut here ]------------
> [ 201.959045] #PF error: [normal kernel read fault]
> [ 201.964272] !se->on_rq
> [ 201.964287] WARNING: CPU: 22 PID: 2965 at kernel/sched/fair.c:6849
> set_next_buddy+0x52/0x70
A quick workaround is below:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1d0dac4fd94f..ef6acfe2cf7d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6834,7 +6834,7 @@ static void set_last_buddy(struct sched_entity *se)
return;
for_each_sched_entity(se) {
- if (SCHED_WARN_ON(!se->on_rq))
+ if (SCHED_WARN_ON(!(se && se->on_rq)))
return;
cfs_rq_of(se)->last = se;
}
@@ -6846,7 +6846,7 @@ static void set_next_buddy(struct sched_entity *se)
return;
for_each_sched_entity(se) {
- if (SCHED_WARN_ON(!se->on_rq))
+ if (SCHED_WARN_ON(!(se && se->on_rq)))
return;
cfs_rq_of(se)->next = se;
}
And now I'm running into a hard LOCKUP:
[ 326.336279] NMI watchdog: Watchdog detected hard LOCKUP on cpu 31
[ 326.336280] Modules linked in: ipt_MASQUERADE xfrm_user xfrm_algo
iptable_nat nf_nat_ipv4 xt_addrtype iptable_filter ip_tables
xt_conntrack x_tables nf_nat nf_conntracki
[ 326.336311] irq event stamp: 164460
[ 326.336312] hardirqs last enabled at (164459):
[<ffffffff810c7a97>] sched_core_balance+0x247/0x470
[ 326.336312] hardirqs last disabled at (164460):
[<ffffffff810c7963>] sched_core_balance+0x113/0x470
[ 326.336313] softirqs last enabled at (164250):
[<ffffffff81e00359>] __do_softirq+0x359/0x40a
[ 326.336314] softirqs last disabled at (164213):
[<ffffffff81095be1>] irq_exit+0xc1/0xd0
[ 326.336315] CPU: 31 PID: 0 Comm: swapper/31 Tainted: G I
5.0.0-rc8-00542-gd697415be692-dirty #15
[ 326.336316] Hardware name: Intel Corporation S2600CP/S2600CP, BIOS
SE5C600.86B.99.99.x058.082120120902 08/21/2012
[ 326.336317] RIP: 0010:native_queued_spin_lock_slowpath+0x18f/0x1c0
[ 326.336318] Code: c1 ee 12 83 e0 03 83 ee 01 48 c1 e0 05 48 63 f6
48 05 80 51 1e 00 48 03 04 f5 40 58 39 82 48 89 10 8b 42 08 85 c0 75
09 f3 90 <8b> 42 08 85 c0 74 f7 4b
[ 326.336318] RSP: 0000:ffffc9000643bd58 EFLAGS: 00000046
[ 326.336319] RAX: 0000000000000000 RBX: ffff888c0ade4400 RCX: 0000000000800000
[ 326.336320] RDX: ffff88980bbe5180 RSI: 0000000000000019 RDI: ffff888c0ade4400
[ 326.336321] RBP: ffff888c0ade4400 R08: 0000000000800000 R09: 00000000001e3a80
[ 326.336321] R10: ffffc9000643bd08 R11: 0000000000000000 R12: 0000000000000000
[ 326.336322] R13: 0000000000000000 R14: ffff88980bbe4400 R15: 000000000000001f
[ 326.336323] FS: 0000000000000000(0000) GS:ffff88980ba00000(0000)
knlGS:0000000000000000
[ 326.336323] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 326.336324] CR2: 00007fdcd7fd7728 CR3: 00000017e821a001 CR4: 00000000000606e0
[ 326.336325] Call Trace:
[ 326.336325] do_raw_spin_lock+0xab/0xb0
[ 326.336326] _raw_spin_lock+0x4b/0x60
[ 326.336326] double_rq_lock+0x99/0x140
[ 326.336327] sched_core_balance+0x11e/0x470
[ 326.336327] __balance_callback+0x49/0xa0
[ 326.336328] __schedule+0x1113/0x1570
[ 326.336328] schedule_idle+0x1e/0x40
[ 326.336329] do_idle+0x16b/0x2a0
[ 326.336329] cpu_startup_entry+0x19/0x20
[ 326.336330] start_secondary+0x17f/0x1d0
[ 326.336331] secondary_startup_64+0xa4/0xb0
[ 330.959367] ---[ end Kernel panic - not syncing: Hard LOCKUP ]---