Message-ID: <20131219181410.GB32508@gmail.com>
Date: Thu, 19 Dec 2013 19:14:10 +0100
From: Ingo Molnar <mingo@...nel.org>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Len Brown <lenb@...nel.org>,
x86@...nel.org, linux-pm@...r.kernel.org,
linux-kernel@...r.kernel.org, Len Brown <len.brown@...el.com>,
stable@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Mike Galbraith <efault@....de>, Borislav Petkov <bp@...en8.de>
Subject: Re: [PATCH] x86 idle: repair large-server 50-watt idle-power regression

* H. Peter Anvin <hpa@...or.com> wrote:

> On 12/19/2013 09:36 AM, Peter Zijlstra wrote:
> > On Thu, Dec 19, 2013 at 06:25:35PM +0100, Peter Zijlstra wrote:
> >> That said, I would find it very strange indeed if a CLFLUSH doesn't also
> >> flush the store buffer.
> >
> > OK, it explicitly states it does not do that and you indeed need
> > an mfence before the clflush.
>
> So, MONITOR is defined to be ordered as a load, which I think should
> be adequate, but I really wonder if we should have mfence on both
> sides of clflush. This is now up to 9 bytes, which is perhaps pushing
> the limit of how much we would be willing to patch out.
>
> On the other hand - the CLFLUSH seems to have worked well enough by
> itself, and this is all probabilistic anyway, so perhaps we should
> just leave the naked CLFLUSH in and not worry about it unless
> measurements say otherwise?

So I think the window of breakage was rather large here, and since it
seems to trigger on rare types of hardware, I think we'd be better off
erring on the side of robustness this time around ...

This is the 'go to idle' path, which isn't as time-critical as the
'get out of idle' code path.
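
To make that tradeoff concrete, here is a minimal sketch of the fenced
variant being discussed, assuming the usual mb(), clflush(), __monitor()
and __mwait() helpers and a mwait_idle_with_hints() style entry point;
the exact function and placement here are illustrative, not the final
patch:

	static inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
	{
		void *flags = &current_thread_info()->flags;

		mb();			/* MFENCE: drain the store buffer first */
		clflush(flags);		/* evict the monitored cache line */
		mb();			/* MFENCE: order CLFLUSH before MONITOR */

		__monitor(flags, 0, 0);	/* arm the monitor (ordered as a load) */
		if (!need_resched())
			__mwait(eax, ecx);	/* sleep until the line is written */
	}

The extra fences only lengthen the 'go to idle' side, so whatever cost
they add stays on the non-time-critical path.
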
Thanks,
Ingo