Message-ID: <1488851515.6858.2.camel@sandisk.com>
Date: Tue, 7 Mar 2017 01:52:09 +0000
From: Bart Van Assche <Bart.VanAssche@...disk.com>
To: "tglx@...utronix.de" <tglx@...utronix.de>,
"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>
CC: "mingo@...nel.org" <mingo@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"hpa@...or.com" <hpa@...or.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
Subject: Re: [GIT pull] CPU hotplug updates for 4.9

On Mon, 2016-10-03 at 19:37 +0200, Thomas Gleixner wrote:
> Yet another batch of cpu hotplug core updates and conversions:
>
> - Provide core infrastructure for multi instance drivers so the drivers
>   do not have to keep custom lists.
>
> - Convert custom lists to the new infrastructure. The block-mq custom
>   list conversion comes through the block tree and makes the diffstat
>   tip over to more lines removed than added.
>
> - Handle unbalanced hotplug enable/disable calls more gracefully.
>
> - Remove the obsolete CPU_STARTING/DYING notifier support.
>
> - Convert another batch of notifier users.
>
> The relayfs changes which conflicted with the conversion have been
> shipped to me by Andrew.
>
> The remaining lot is targeted for 4.10 so that we finally can remove
> the rest of the notifiers.
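
As an aside, for readers who haven't followed the cpuhp rework: the
multi-instance infrastructure mentioned above lets a driver register its
callbacks once and then hand the core one hlist_node per device, so the
core maintains the instance list on the driver's behalf. A minimal
sketch with hypothetical names, based on the cpuhp_setup_state_multi()
and cpuhp_state_add_instance() API:

	#include <linux/cpuhotplug.h>
	#include <linux/list.h>
	#include <linux/printk.h>

	/* Hypothetical per-device instance; the embedded hlist_node is
	 * what the cpuhp core links into its per-state instance list. */
	struct demo_instance {
		struct hlist_node node;
		int id;
	};

	/* Hypothetical callback: invoked for one instance on one CPU. */
	static int demo_online(unsigned int cpu, struct hlist_node *node)
	{
		struct demo_instance *inst =
			hlist_entry(node, struct demo_instance, node);

		pr_info("demo%d: CPU %u online\n", inst->id, cpu);
		return 0;
	}

	static enum cpuhp_state demo_state;

	static int demo_setup(void)
	{
		/* Register the callbacks once for the whole driver. */
		int ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
						  "demo:online",
						  demo_online, NULL);
		if (ret < 0)
			return ret;
		demo_state = ret;
		return 0;
	}

	static int demo_add_device(struct demo_instance *inst)
	{
		/* Add one node per device; demo_online() is then run for
		 * this instance on every CPU that is already up, and no
		 * driver-private list is needed anymore. */
		return cpuhp_state_add_instance(demo_state, &inst->node);
	}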

Hello Thomas,

I'm not sure whether this behavior was introduced by the changes in this
pull request, but since I started testing v4.11-rc[01] I have repeatedly
run into a cpuhp_issue_call() hang:
# ./system-log | grep -a cpuhp_issue_call
Mar 3 11:32:49 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 12:04:38 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 12:12:50 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 12:21:02 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 12:29:13 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 12:37:25 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 12:45:36 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 12:53:48 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 16:59:52 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 17:08:04 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 17:16:15 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 17:24:27 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 17:32:39 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 17:40:50 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 17:49:02 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 17:57:13 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 18:05:25 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 3 18:13:36 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 6 16:34:17 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 6 16:42:29 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 6 16:50:40 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 6 16:58:52 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 6 17:07:04 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 6 17:15:15 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 6 17:23:27 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 6 17:31:38 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 6 17:39:50 kernel: cpuhp_issue_call+0xb9/0xe0
Mar 6 17:48:01 kernel: cpuhp_issue_call+0xb9/0xe0

The latest complaint is as follows:

INFO: task systemd-udevd:837 blocked for more than 480 seconds.
Tainted: G I 4.11.0-rc1-dbg+ #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
systemd-udevd D 0 837 542 0x00000104
Call Trace:
__schedule+0x302/0xc30
schedule+0x38/0x90
schedule_timeout+0x255/0x490
wait_for_completion+0x103/0x170
cpuhp_issue_call+0xb9/0xe0
__cpuhp_setup_state+0xf6/0x180
pkg_temp_thermal_init+0x76/0x1000 [x86_pkg_temp_thermal]
do_one_initcall+0x3e/0x170
do_init_module+0x5a/0x1ed
load_module+0x2339/0x2a40
SYSC_finit_module+0xbc/0xf0
SyS_finit_module+0x9/0x10
do_syscall_64+0x57/0x140
entry_SYSCALL64_slow_path+0x25/0x25
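
For context, __cpuhp_setup_state() in the trace above is reached through
the public cpuhp_setup_state() API, which installs per-CPU online and
offline callbacks; cpuhp_issue_call() is the helper that invokes the
online callback on each CPU that is already up and waits for it to
complete, and that wait is where the task above is stuck. A minimal
sketch of such a registration, with hypothetical callback names:

	#include <linux/cpu.h>
	#include <linux/cpuhotplug.h>
	#include <linux/module.h>

	/* Hypothetical callback: runs for each CPU that comes online. */
	static int demo_cpu_online(unsigned int cpu)
	{
		pr_info("demo: CPU %u online\n", cpu);
		return 0;
	}

	/* Hypothetical callback: runs before a CPU goes offline. */
	static int demo_cpu_offline(unsigned int cpu)
	{
		pr_info("demo: CPU %u going offline\n", cpu);
		return 0;
	}

	static enum cpuhp_state demo_state;

	static int __init demo_init(void)
	{
		/*
		 * Invokes demo_cpu_online() on all CPUs that are already
		 * up and waits (via cpuhp_issue_call()) for those calls
		 * to complete before returning.
		 */
		int ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
					    "demo:online",
					    demo_cpu_online,
					    demo_cpu_offline);
		if (ret < 0)
			return ret;
		demo_state = ret;
		return 0;
	}

	static void __exit demo_exit(void)
	{
		cpuhp_remove_state(demo_state);
	}

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");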

Kernel v4.10 runs fine on the same system. Here is an excerpt from the
dmidecode output:

Handle 0x0400, DMI type 4, 42 bytes
Processor Information
Socket Designation: CPU1
Type: Central Processor
Family: Xeon
Manufacturer: Intel
ID: F2 06 03 00 FF FB EB BF
Signature: Type 0, Family 6, Model 63, Stepping 2

Please let me know if you need more information.

Thanks,

Bart.