Message-ID: <CAFTL4hz7XTsxWqSgg1d2zvyz-6qvrXmgJh9Tg0t5VJ7tLD_TKg@mail.gmail.com>
Date: Mon, 4 Dec 2017 16:53:15 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: paulmck@...ux.vnet.ibm.com
Cc: linux-kernel@...r.kernel.org, xiaolong.ye@...el.com,
tglx@...utronix.de, cmetcalf@...lanox.com, cl@...ux.com,
torvalds@...ux-foundation.org, lcapitulino@...hat.com,
efault@....de, peterz@...radead.org, riel@...hat.com,
kernellwp@...il.com, mingo@...nel.org, john.stultz@...aro.org
Subject: Re: [PATCH] sched/isolation: Make NO_HZ_FULL select CPU_ISOLATION
2017-12-02 20:24 UTC+01:00, Paul E. McKenney <paulmck@...ux.vnet.ibm.com>:
> On Sat, Dec 02, 2017 at 02:59:12PM +0100, Frederic Weisbecker wrote:
>> 2017-11-30 21:20 UTC+01:00, Paul E. McKenney
>> <paulmck@...ux.vnet.ibm.com>:
>> > Commit 5c4991e24c69 ("sched/isolation: Split out new
>> > CONFIG_CPU_ISOLATION=y config from CONFIG_NO_HZ_FULL") can result
>> > in RCU CPU stall warnings when running rcutorture with
>> > CONFIG_NO_HZ_FULL_ALL=y and CONFIG_CPU_ISOLATION=n. These warnings
>> > are caused by RCU's grace-period kthreads sleeping for a few
>> > jiffies, but never being awakened:
>> >
>> > [  116.353432] rcu_preempt kthread starved for 9974 jiffies! g4294967208 c4294967207 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x402 ->cpu=0
>> > [ 116.355517] rcu_preempt I 7464 8 2 0x80000000
>> > [ 116.356543] Call Trace:
>> > [ 116.357008] __schedule+0x493/0x620
>> > [ 116.357682] schedule+0x24/0x40
>> > [ 116.358291] schedule_timeout+0x330/0x3b0
>> > [ 116.359024] ? preempt_count_sub+0xea/0x140
>> > [ 116.359806] ? collect_expired_timers+0xb0/0xb0
>> > [ 116.360660] rcu_gp_kthread+0x6bf/0xef0
>> >
>> > This commit therefore makes NO_HZ_FULL select CPU_ISOLATION, which
>> > prevents this behavior and seems like it was the original intention in
>> > any case.
>>
>> Although CONFIG_NO_HZ_FULL should indeed select CONFIG_CPU_ISOLATION,
>> I'm surprised about this stall. I'm even more surprised that setting
>> CONFIG_CPU_ISOLATION=y is enough to fix the issue, because
>> CONFIG_NO_HZ_FULL_ALL shortcuts CONFIG_CPU_ISOLATION entirely (which
>> is not good, but that's work in progress...).
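
For reference, the patch under discussion amounts to a one-line Kconfig
change, roughly like this sketch (the existing dependencies and selects
are elided; in mainline the NO_HZ_FULL option lives in kernel/time/Kconfig):

```kconfig
config NO_HZ_FULL
	bool "Full dynticks system (tickless)"
	depends on SMP
	# ... other existing dependencies and selects elided ...
	select CPU_ISOLATION
```

With the select in place, enabling NO_HZ_FULL can no longer leave
CPU_ISOLATION disabled.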
>
> Yes, and after applying this patch, I get failures a few commits
> later, which appears to be due to other changes that break
> CONFIG_NO_HZ_FULL_ALL=y. So I have another patch staged that removes
> CONFIG_NO_HZ_FULL_ALL, on the grounds that no one else has complained,
> so rcutorture is likely to be the only user, and I don't see the point
> of having a Kconfig option for only one user.
I would prefer to keep it. It's useful for automated boot testing
based on configs, such as the 0-day or -tip test machines. But I'm
likely to migrate it to the isolcpus implementation, maybe something
along the lines of CONFIG_CPU_ISOLATION_ALL.
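
Such a hypothetical option could look something like this (purely
illustrative; the name and exact semantics are not settled):

```kconfig
config CPU_ISOLATION_ALL
	bool "Isolate every CPU except the boot CPU"
	depends on CPU_ISOLATION
	help
	  Behave as if all CPUs but the boot CPU were isolated on the
	  kernel command line, keeping CPU 0 as the only housekeeping
	  CPU. Mainly intended for automated boot testing.
```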
>
>> Did you have any nohz_full= or isolcpus= boot options?
>
> Replacing CONFIG_NO_HZ_FULL_ALL=y with nohz_full=1-7 works, that
> is CONFIG_NO_HZ_FULL=y, CONFIG_NO_HZ_FULL_ALL=n, and nohz_full=1-7
> on an eight-CPU test.
>
> But it is relatively easy to test. Running the rcutorture TREE04
> scenario on a four-socket x86 gets me RCU CPU stall warnings within
> a few minutes more than half the time. ;-)
Indeed, I managed to trigger something. If it's the same issue, I
should be able to track down the root cause.
[  123.907557] ??? Writer stall state RTWS_STUTTER(8) g160 c160 f0x0 ->state 0x1 cpu 2
[ 123.915184] rcu_torture_wri S 0 111 2 0x80080000
[ 123.920673] Call Trace:
[ 123.923096] ? __schedule+0x2bf/0xbb0
[ 123.926715] ? _raw_spin_unlock_irqrestore+0x59/0x70
[ 123.931657] schedule+0x3c/0x90
[ 123.934777] schedule_timeout+0x1e1/0x560
[ 123.938785] ? __next_timer_interrupt+0xd0/0xd0
[ 123.943276] stutter_wait+0xc5/0xe0
[ 123.946738] rcu_torture_writer+0x1ae/0x730
[ 123.950912] ? rcu_torture_pipe_update+0xf0/0xf0
[ 123.955491] kthread+0x15f/0x1a0
[ 123.958702] ? kthread_unpark+0x60/0x60
[ 123.962523] ret_from_fork+0x24/0x30
[ 123.966091] rcu_preempt: wait state: 1 ->state: 0x402
[ 123.971112] rcu_sched: wait state: 1 ->state: 0x402
[ 123.975953] rcu_bh: wait state: 1 ->state: 0x402
Thanks!