Message-ID: <DE8DF0795D48FD4CA783C40EC829233523183E@SHSMSX101.ccr.corp.intel.com>
Date: Thu, 14 Jun 2012 08:56:11 +0000
From: "Liu, Jinsong" <jinsong.liu@...el.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
CC: "xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: xen/mce - mcelog at 100% cpu
Konrad Rzeszutek Wilk wrote:
> On Tue, Jun 12, 2012 at 08:40:15AM -0400, Konrad Rzeszutek Wilk wrote:
>> On Tue, Jun 12, 2012 at 07:51:03AM +0000, Liu, Jinsong wrote:
>>>> From aa2ce7440f16002266dc8464f749992d0c8ac0e5 Mon Sep 17 00:00:00 2001
>>> From: Liu, Jinsong <jinsong.liu@...el.com>
>>> Date: Tue, 12 Jun 2012 23:11:16 +0800
>>> Subject: [PATCH] xen/mce: schedule a workqueue to avoid sleep in
>>> atomic context
>>>
>>> copy_to_user might sleep and print a stack trace if it is executed
>>> in atomic context (under a spinlock).
>>>
>>> This patch schedules a workqueue from the IRQ handler to poll the
>>> data, and uses a mutex instead of a spinlock, so copy_to_user is
>>> never called in atomic context and may safely sleep.
>>
>> Ah much better. Usually one also includes the report of what the
>> stack trace was. So I've added that in.
>
> So there is another bug: mcelog is spinning at 100% CPU (and only
> under Xen).
>
> It seems to be doing:
>
> ppoll([{fd=4, events=POLLIN}, {fd=3, events=POLLIN}], 2, NULL, [], 8) = 1 ([{fd=3, revents=POLLIN}])
> read(3, "", 2816) = 0
> ppoll([{fd=4, events=POLLIN}, {fd=3, events=POLLIN}], 2, NULL, [], 8) = 1 ([{fd=3, revents=POLLIN}])
> read(3, "", 2816) = 0
>
> constantly.
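
Looking at the strace: ppoll() keeps reporting POLLIN on fd 3 while read()
returns 0 (no data), so mcelog never blocks. That suggests the device's
->poll callback signals readable even when ->read has nothing to return.
A poll implementation should only report POLLIN when a record is actually
pending, roughly like below (a minimal sketch only; xen_mce_chrdev_poll,
xen_mce_chrdev_wait and xen_mcelog are illustrative names, not necessarily
the exact identifiers in the code):

#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(xen_mce_chrdev_wait);

/* Illustrative record buffer; only the pending-record count matters here. */
static struct { unsigned int next; } xen_mcelog;

static unsigned int xen_mce_chrdev_poll(struct file *file, poll_table *wait)
{
	/* Register on the wait queue so a later wake_up() re-polls us. */
	poll_wait(file, &xen_mce_chrdev_wait, wait);

	/* Only claim readability when a subsequent read() can actually
	 * return data; otherwise userspace loops on poll()/read(). */
	if (xen_mcelog.next)
		return POLLIN | POLLRDNORM;

	return 0;
}

If the Xen path returns POLLIN unconditionally, or read() fails to consume
the pending-record count, that alone would explain the busy loop.
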
I will debug it. I tried on my platform but failed to reproduce it. (You are still using the config you sent me last time, right?)
Could you tell me your steps?
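
For reference, the deferral pattern from the patch above looks roughly like
this (again only a minimal sketch, with illustrative names such as
xen_mce_work and mcelog_lock): the IRQ handler never sleeps, it just
schedules work, and everything that may sleep runs in process context
under a mutex.

#include <linux/interrupt.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(mcelog_lock);

static void xen_mce_work_fn(struct work_struct *work)
{
	/* Process context: taking a mutex (which may sleep) is fine,
	 * and so is copy_to_user() on the related read path. */
	mutex_lock(&mcelog_lock);
	/* ... fetch new MCE records from the hypervisor ... */
	mutex_unlock(&mcelog_lock);
}
static DECLARE_WORK(xen_mce_work, xen_mce_work_fn);

static irqreturn_t xen_mce_interrupt(int irq, void *dev_id)
{
	/* Atomic (interrupt) context: never sleep here, just defer. */
	schedule_work(&xen_mce_work);
	return IRQ_HANDLED;
}
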
Thanks,
Jinsong