Message-ID: <5B8DA87D05A7694D9FA63FD143655C1B749D3C1C@hasmsx108.ger.corp.intel.com>
Date: Sun, 30 Jul 2017 09:16:56 +0000
From: "Winkler, Tomas" <tomas.winkler@...el.com>
To: Dominik Brodowski <linux@...inikbrodowski.net>
CC: "Usyskin, Alexander" <alexander.usyskin@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: MEI-related WARN_ON() triggered during resume-from-sleep on
v4.12-rc2+
> -----Original Message-----
> From: Dominik Brodowski [mailto:linux@...inikbrodowski.net]
> Sent: Sunday, July 30, 2017 11:59
> To: Winkler, Tomas <tomas.winkler@...el.com>
> Cc: Usyskin, Alexander <alexander.usyskin@...el.com>; linux-
> kernel@...r.kernel.org
> Subject: MEI-related WARN_ON() triggered during resume-from-sleep on
> v4.12-rc2+
>
> Tomas,
>
> on Linus' most recent kernel (v4.12-rc2, git head 0a07b238e5f48), I see the
> following message on my Dell XPS13 when resuming from sleep. MEI is,
> AFAIK, not being used on this system:
>
Thanks for the report. We haven't changed the logic of this code since 4.12, so we need to look for changes in the pci and/or pm subsystems. We'll try to bisect it.
Thanks
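The bisect workflow mentioned above can be sketched on a toy repository. This is a hypothetical, self-contained illustration only: the file name, commit messages, and the "BUG" marker are invented, and against a real kernel tree the `git bisect run` script would build and boot-test each revision instead of grepping a file.

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "you@example.com"
git config user.name "you"

# Create eight commits; the invented "regression" first lands in commit 5.
for i in 1 2 3 4 5 6 7 8; do
    echo "rev $i" > file
    if [ "$i" -ge 5 ]; then echo "BUG" >> file; fi
    git add file
    git commit -q -m "commit $i"
done

# HEAD is known bad, the root commit is known good.
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)" >/dev/null

# The run script exits 0 (good) when the marker is absent, 1 (bad) when present;
# git bisect halves the range on each step and stops at the first bad commit.
git bisect run sh -c '! grep -q BUG file' >/dev/null

# Print the subject of the first bad commit.
bad=$(git rev-parse bisect/bad)
git log -1 --format=%s "$bad"    # prints "commit 5"
```

For a kernel regression like this one, the run script would typically `make`, install, reboot into the candidate kernel, and report good/bad based on whether the WARN_ON fires.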
> [ 192.940537] Restarting tasks ...
> [ 192.940610] PGI is not set
> [  192.940619] ------------[ cut here ]------------
> [  192.940623] WARNING: CPU: 0 PID: 1661 at /home/brodo/local/kernel/git/linux/drivers/misc/mei/hw-me.c:653 mei_me_pg_exit_sync+0x351/0x360
> [  192.940624] Modules linked in:
> [  192.940627] CPU: 0 PID: 1661 Comm: kworker/0:3 Not tainted 4.13.0-rc2+ #2
> [  192.940628] Hardware name: Dell Inc. XPS 13 9343/0TM99H, BIOS A11 12/08/2016
> [  192.940630] Workqueue: pm pm_runtime_work
> <snip>
> [  192.940642] Call Trace:
> [  192.940646]  ? pci_pme_active+0x1de/0x1f0
> [  192.940649]  ? pci_restore_standard_config+0x50/0x50
> [  192.940651]  ? kfree+0x172/0x190
> [  192.940653]  ? kfree+0x172/0x190
> [  192.940655]  ? pci_restore_standard_config+0x50/0x50
> [  192.940663]  mei_me_pm_runtime_resume+0x3f/0xc0
> [  192.940665]  pci_pm_runtime_resume+0x7a/0xa0
> [  192.940667]  __rpm_callback+0xb9/0x1e0
> [  192.940668]  ? preempt_count_add+0x6d/0xc0
> [  192.940670]  rpm_callback+0x24/0x90
> [  192.940672]  ? pci_restore_standard_config+0x50/0x50
> [  192.940674]  rpm_resume+0x4e8/0x800
> [  192.940676]  pm_runtime_work+0x55/0xb0
> [  192.940678]  process_one_work+0x184/0x3e0
> [  192.940680]  worker_thread+0x4d/0x3a0
> [  192.940681]  ? preempt_count_sub+0x9b/0x100
> [  192.940683]  kthread+0x122/0x140
> [  192.940684]  ? process_one_work+0x3e0/0x3e0
> [  192.940685]  ? __kthread_create_on_node+0x1a0/0x1a0
> [  192.940688]  ret_from_fork+0x27/0x40
> [  192.940690] Code: 96 3a 9e ff 48 8b 7d 98 e8 cd 21 58 00 83 bb bc 01 00 00 04 0f 85 40 fe ff ff e9 41 fe ff ff 48 c7 c7 5f 04 99 96 e8 93 6b 9f ff <0f> ff e9 5d fd ff ff e8 33 fe 99 ff 0f 1f 00 0f 1f 44 00 00 55
> [  192.940719] ---[ end trace a86955597774ead8 ]---
> [  192.942540] done.
>
> This doesn't / didn't happen on v4.12.
>
> Best,
> Dominik