Message-ID: <CA+55aFxAh7b0OENrhnxSnvZE1W9wZpg6WNm3iSYk9=8rWChuJA@mail.gmail.com>
Date:	Tue, 12 Jan 2016 09:05:19 -0800
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Chris Wilson <chris@...is-wilson.co.uk>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andy Lutomirski <luto@...capital.net>,
	"H. Peter Anvin" <hpa@...or.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Ross Zwisler <ross.zwisler@...ux.intel.com>,
	"H . Peter Anvin" <hpa@...ux.intel.com>,
	Borislav Petkov <bp@...en8.de>,
	Brian Gerst <brgerst@...il.com>,
	Denys Vlasenko <dvlasenk@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Imre Deak <imre.deak@...el.com>,
	Daniel Vetter <daniel.vetter@...ll.ch>,
	DRI <dri-devel@...ts.freedesktop.org>
Subject: Re: [PATCH] x86: Add an explicit barrier() to clflushopt()

On Tue, Jan 12, 2016 at 8:37 AM, Chris Wilson <chris@...is-wilson.co.uk> wrote:
> On Mon, Jan 11, 2016 at 09:05:06PM +0000, Chris Wilson wrote:
>> I can narrow down the principal buggy path by doing the clflush(vend-1)
>> in the callers at least.
>
> That leads to the suspect path being a read back of a cache line from
> main memory that was just written to by the GPU.

How do you know it was written by the GPU?

Maybe it's a memory ordering issue on the GPU. Say it writes something
to memory, then sets the "I'm done" flag (or whatever you check), but
because of write ordering on the GPU the "I'm done" flag becomes
visible before the data does.

So the reason you see the old content may just be that the GPU writes
are still buffered on the GPU. And your adding a clflushopt on the same
address just changes the timing enough that you don't see the memory
ordering problem any more (or it's just much harder to see; it might
still be there).
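
A minimal sketch of that pattern, in kernel-style pseudo-C (the names
and the deliberately missing barrier are invented for illustration,
this is not code from this thread):

	/* Producer side (the GPU): write the data, then the flag. */
	static void producer(u32 *buf, u32 *done_flag)
	{
		buf[0] = 0xdeadbeef;		/* data write */
		/* no write barrier here, on purpose */
		WRITE_ONCE(*done_flag, 1);	/* the flag can become visible
						 * before the data does */
	}

	/* Consumer side (the CPU): poll the flag, then read the data. */
	static u32 consumer(u32 *buf, u32 *done_flag)
	{
		while (!READ_ONCE(*done_flag))
			cpu_relax();
		return buf[0];			/* can still see stale data while
						 * the producer's write is buffered */
	}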

Maybe the reason you only see the problem with the last cacheline is
simply that the "last" cacheline is also the last that was written by
the GPU, and it's still in the GPU write buffers.
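
For reference, the flush loop in question has roughly this shape (a
simplified sketch, not the exact kernel clflush_cache_range() code):

	static void flush_range(void *vaddr, unsigned int size)
	{
		unsigned long clflush_size = boot_cpu_data.x86_clflush_size;
		void *vend = vaddr + size;
		void *p;

		mb();
		for (p = (void *)((unsigned long)vaddr & ~(clflush_size - 1));
		     p < vend; p += clflush_size)
			clflushopt(p);	/* the last iteration covers the
					 * line containing vend - 1 */
		mb();	/* clflushopt is weakly ordered, hence the
			 * barriers around the loop */
	}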

Also, did you ever print out the value of clflush_size? Maybe we just
got it wrong and it's bogus data.
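
A quick way to dump it, as a sketch (boot_cpu_data is where the x86
cacheline flush size ends up):

	pr_info("x86_clflush_size = %u\n", boot_cpu_data.x86_clflush_size);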

                    Linus
