Message-ID: <20131219211406.GA3464@gmail.com>
Date:	Thu, 19 Dec 2013 22:14:06 +0100
From:	Ingo Molnar <mingo@...nel.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Peter Anvin <hpa@...or.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Peter Anvin <hpa@...ux.intel.com>,
	Len Brown <len.brown@...el.com>,
	linux-tip-commits@...r.kernel.org
Subject: Re: [tip:x86/idle] x86, idle: Use static_cpu_has() for CLFLUSH
 workaround, add barriers


* Linus Torvalds <torvalds@...ux-foundation.org> wrote:

> On Fri, Dec 20, 2013 at 5:40 AM, Ingo Molnar <mingo@...nel.org> wrote:
> >
> > The first mb() looks superfluous, because see
> > current_set_polling_and_test():
> >
> > static inline bool __must_check current_set_polling_and_test(void)
> > {
> >         __current_set_polling();
> >
> >         /*
> >          * Polling state must be visible before we test NEED_RESCHED,
> >          * paired by resched_task()
> >          */
> >         smp_mb();
> >
> >         return unlikely(tif_need_resched());
> > }
> >
> > So it already has a (MFENCE) barrier after ->flags is modified.
> 
> So what?
> 
> The mb() isn't necessarily against the *write*, it is also against 
> the *read*.
> 
> If the clflush is needed before the monitor, it's likely because the 
> monitor instruction has some bug with already-existing cachelines in 
> shared state or whatever.

So, my thinking was that maybe it's the other way around: the problem 
is with the write not being drained to cache yet.

One possibility would be that MONITOR does not see the previous write 
while it is still sitting in the store buffer, and when the store hits 
the cache shortly afterwards it falsely 'wakes up' the MWAIT.

(If so then the race window roughly depends on the number of cycles 
until the current_set_polling_and_test() modification retires, which 
would explain how small reorganizations of the code triggered the hw 
bug.)

The CLFLUSH ensures that the modification is visible in the cache 
before MONITOR is run.

If this guess is true then the need_resched() read is immaterial to 
the bug. On the flip side, if my guess is wrong then I'm leaving 
another subtle race window in the code...

So yeah, I agree that without more information from Intel we are 
better off with a more conservative approach. The bug took relatively 
long to find: Thomas did the original 7d1a941731fab back in March, 3 
kernel releases ago ...

If only there were Intel employees on Cc: who could tell us more about 
the background of the bug ;-)

> Which means that we want to make sure that the clflush is ordered wrt 
> *reads* from that cacheline too.
> 
> clflush has nothing specifically to do with writes.

Yes.

Thanks,

	Ingo
