Message-ID: <1367505199.30667.132.camel@gandalf.local.home>
Date: Thu, 02 May 2013 10:33:19 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
RT <linux-rt-users@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Clark Williams <clark@...hat.com>,
John Kacur <jkacur@...hat.com>,
Tony Luck <tony.luck@...el.com>,
Borislav Petkov <bp@...en8.de>,
Mauro Carvalho Chehab <mchehab@...hat.com>,
Ingo Molnar <mingo@...nel.org>,
"H. Peter Anvin" <hpa@...ux.intel.com>
Subject: Re: [PATCH RT v2] x86/mce: Defer mce wakeups to threads for
PREEMPT_RT
On Fri, 2013-04-26 at 10:41 +0200, Sebastian Andrzej Siewior wrote:
> * Steven Rostedt | 2013-04-11 14:33:34 [-0400]:
>
> >As wait queue locks are notorious for long hold times, we can not
> >convert them to raw_spin_locks without causing issues with -rt. But
> >Thomas has created a "simple-wait" structure that uses raw spin locks
> >which may have been a good fit.
> >
> >Unfortunately, wait queues are not the only issue, as the mce_notify_irq
> >also does a schedule_work(), which grabs the workqueue spin locks that
> >have the exact same issue.
>
> mce_notify_irq() can use simple_waitqueue, no?
Yeah, and I went down that path. But mce_notify_irq() also schedules
work, which has the same issue.
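To spell that out, the current code is shaped roughly like this
(paraphrasing from memory, not the exact mce.c source):

int mce_notify_irq(void)
{
        if (test_and_clear_bit(0, &mce_need_notify)) {
                /* waking the /dev/mcelog pollers could move to a simple waitqueue */
                wake_up_interruptible(&mce_chrdev_wait);

                /*
                 * ... but this still takes the workqueue's sleeping
                 * spinlocks, which we can't do from this context on -rt.
                 */
                if (mce_helper[0])
                        schedule_work(&mce_trigger_work);

                return 1;
        }
        return 0;
}

So converting just the wake up only solves half of it.
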
> The other issue is that mce_report_event() is scheduling a per-cpu
> workqueue (mce_schedule_work) in case of a memory fault. This has the
> same issue.
Yeah, that looks like it can be an issue too. I wonder if we can use the
same thread and use flags to decide what to do. Atomically set a flag
for the function to perform, have the thread clear it before doing the
function, and only go back to sleep when all flags are cleared.
Something like the sketch below.
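
Very rough sketch of the combined-thread idea (untested; names like
mce_do_notify()/mce_do_memory_failure() are just placeholders for the
existing work functions):

#define MCE_DO_NOTIFY           0       /* deferred from mce_notify_irq() */
#define MCE_DO_MEMORY_FAILURE   1       /* deferred from mce_report_event() */

static unsigned long mce_work_flags;
static struct task_struct *mce_notify_task;

static int mce_notify_thread(void *unused)
{
        while (!kthread_should_stop()) {
                set_current_state(TASK_INTERRUPTIBLE);
                /* only sleep when every flag has been handled */
                if (!mce_work_flags)
                        schedule();
                __set_current_state(TASK_RUNNING);

                /*
                 * Clear each flag *before* running its function, so an
                 * event that comes in while we are working sets it again
                 * and we loop around instead of losing it.
                 */
                if (test_and_clear_bit(MCE_DO_NOTIFY, &mce_work_flags))
                        mce_do_notify();
                if (test_and_clear_bit(MCE_DO_MEMORY_FAILURE, &mce_work_flags))
                        mce_do_memory_failure();
        }
        return 0;
}

/* replaces the schedule_work()/wake_up() calls in the MCE paths */
static void mce_defer_work(int what)
{
        set_bit(what, &mce_work_flags);
        /*
         * wake_up_process() only takes raw locks, so this should be fine
         * from the irq_work callback where mce_notify_irq() runs today.
         */
        wake_up_process(mce_notify_task);
}

That keeps all the sleeping-lock users out of the MCE path and we only
need the one thread.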
-- Steve