Message-ID: <20200925101030.GA2594@hirez.programming.kicks-ass.net>
Date:   Fri, 25 Sep 2020 12:10:30 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Dietmar Eggemann <dietmar.eggemann@....com>
Cc:     tglx@...utronix.de, mingo@...nel.org, linux-kernel@...r.kernel.org,
        bigeasy@...utronix.de, qais.yousef@....com, swood@...hat.com,
        valentin.schneider@....com, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, rostedt@...dmis.org,
        bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com,
        vincent.donnefort@....com
Subject: Re: [PATCH 0/9] sched: Migrate disable support

On Fri, Sep 25, 2020 at 11:12:09AM +0200, Dietmar Eggemann wrote:

> I get this when running 6 (periodic) RT50 tasks with CPU hp stress on my
> 6 CPU JUNO board (!CONFIG_PREEMPT_RT).
> 
> [   55.490263] ------------[ cut here ]------------
> [   55.505261] Modules linked in:
> [   55.508322] CPU: 3 PID: 24 Comm: migration/3 Not tainted
> 5.9.0-rc1-00132-gc096e6406c50-dirty #90
> [   55.517119] Hardware name: ARM Juno development board (r0) (DT)
> [   55.523058] Stopper: multi_cpu_stop+0x0/0x170 <- 0x0
> [   55.528029] pstate: 20000085 (nzCv daIf -PAN -UAO BTYPE=--)
> [   55.533612] pc : sched_cpu_dying+0x124/0x130
> [   55.537887] lr : sched_cpu_dying+0xd8/0x130
> [   55.542071] sp : ffff800011f0bca0
> [   55.545385] x29: ffff800011f0bca0 x28: 0000000000000002
> [   55.550703] x27: 0000000000000000 x26: 0000000000000060
> [   55.556022] x25: 0000000000000000 x24: 0000000000000001
> [   55.561340] x23: 0000000000000000 x22: 0000000000000003
> [   55.566659] x21: 0000000000000080 x20: 0000000000000003
> [   55.571977] x19: ffff00097ef9e1c0 x18: 0000000000000010
> [   55.577295] x17: 0000000000000000 x16: 0000000000000000
> [   55.582613] x15: 0000000000000000 x14: 000000000000015c
> [   55.587932] x13: 0000000000000000 x12: 00000000000006f1
> [   55.593250] x11: 0000000000000080 x10: 0000000000000000
> [   55.598567] x9 : 0000000000000003 x8 : ffff0009743f5900
> [   55.603886] x7 : 0000000000000003 x6 : 0000000000000000
> [   55.609204] x5 : 0000000000000001 x4 : 0000000000000002
> [   55.614521] x3 : 0000000000000000 x2 : 0000000000000013
> [   55.619839] x1 : 0000000000000008 x0 : 0000000000000003
> [   55.625158] Call trace:
> [   55.627607]  sched_cpu_dying+0x124/0x130
> [   55.631535]  cpuhp_invoke_callback+0x88/0x210
> [   55.635897]  take_cpu_down+0x7c/0xd8
> [   55.639475]  multi_cpu_stop+0xac/0x170
> [   55.643227]  cpu_stopper_thread+0x98/0x130
> [   55.647327]  smpboot_thread_fn+0x1c4/0x280
> [   55.651427]  kthread+0x140/0x160
> [   55.654658]  ret_from_fork+0x10/0x34
> [   55.658239] Code: f000e1c1 913fc021 1400034a 17ffffde (d4210000)
> [   55.664342] ---[ end trace c5b8988b7b701e56 ]---
> [   55.668963] note: migration/3[24] exited with preempt_count 3
> 
> int sched_cpu_dying(unsigned int cpu)
>     ...
>     BUG_ON(rq->nr_running != 1 || rq_has_pinned_tasks(rq));
>     ...
> 
> rq->nr_running is always 2 here in these cases.
> 
> balance_hotplug_wait() and sched_cpu_wait_empty() run in cpuhp/X (CFS),
> whereas sched_cpu_dying() runs in migration/X?

takedown_cpu() has:

  kthread_park(per_cpu_ptr(&cpuhp_state, cpu)->thread);

before calling:

  err = stop_machine_cpuslocked(take_cpu_down, NULL, cpumask_of(cpu));

So when we get to sched_cpu_dying(), the only task that _should_ still
be there is migration/X.
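
For reference, takedown_cpu() in kernel/cpu.c looks roughly like this
around v5.9 (an abridged sketch, error handling and locking details
trimmed):

  static int takedown_cpu(unsigned int cpu)
  {
          struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
          int err;

          /* Park the cpuhp/X smpboot thread before stopping the machine */
          kthread_park(st->thread);

          irq_lock_sparse();

          /* All preempt/rcu users must now observe !cpu_active() */
          err = stop_machine_cpuslocked(take_cpu_down, NULL,
                                        cpumask_of(cpu));
          ...
  }

That is, cpuhp/X itself is already parked by the time take_cpu_down(),
and hence sched_cpu_dying(), runs on the stopper thread.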

Do you have any idea what thread, other than migration/X, is still
active on that CPU? Per sched_cpu_wait_empty() we should've pushed out
all userspace tasks, and the cpu hotplug machinery should've put all the
per-cpu kthreads to sleep at this point.
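
For the wait side, the series (as it later landed, around v5.11) has
sched_cpu_wait_empty(), running in cpuhp/X, sleep until nothing but the
caller remains runnable; a sketch, not verbatim:

  int sched_cpu_wait_empty(unsigned int cpu)
  {
          balance_hotplug_wait();
          return 0;
  }

  static void balance_hotplug_wait(void)
  {
          struct rq *rq = this_rq();

          /* Sleep until only one runnable task (the caller) is left and
           * nothing is migrate-disable pinned to this runqueue. */
          rcuwait_wait_event(&rq->hotplug_wait,
                             rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
                             TASK_UNINTERRUPTIBLE);
  }

  static inline bool rq_has_pinned_tasks(struct rq *rq)
  {
          /* nr_pinned is raised/dropped by migrate_disable/enable() */
          return rq->nr_pinned;
  }

So nr_running == 2 at sched_cpu_dying() time suggests something was
woken or enqueued on the outgoing CPU after that wait completed.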
