Message-ID: <1494349243.3965.21.camel@codethink.co.uk>
Date: Tue, 09 May 2017 18:00:43 +0100
From: Ben Hutchings <ben.hutchings@...ethink.co.uk>
To: Dan Williams <dan.j.williams@...el.com>,
Ross Zwisler <ross.zwisler@...ux.intel.com>,
Toshi Kani <toshi.kani@....com>
Cc: linux-kernel@...r.kernel.org, stable@...r.kernel.org,
x86@...nel.org, Jan Kara <jack@...e.cz>,
Jeff Moyer <jmoyer@...hat.com>, Ingo Molnar <mingo@...hat.com>,
Christoph Hellwig <hch@....de>,
"H. Peter Anvin" <hpa@...or.com>,
Al Viro <viro@...iv.linux.org.uk>,
Thomas Gleixner <tglx@...utronix.de>,
Matthew Wilcox <mawilcox@...rosoft.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH 4.4 26/28] x86, pmem: fix broken __copy_user_nocache
cache-bypass assumptions
On Tue, 2017-04-25 at 16:08 +0100, Greg Kroah-Hartman wrote:
> 4.4-stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Dan Williams <dan.j.williams@...el.com>
>
> commit 11e63f6d920d6f2dfd3cd421e939a4aec9a58dcd upstream.
[...]
> +	if (iter_is_iovec(i)) {
> +		unsigned long flushed, dest = (unsigned long) addr;
> +
> +		if (bytes < 8) {
> +			if (!IS_ALIGNED(dest, 4) || (bytes != 4))
> +				__arch_wb_cache_pmem(addr, 1);
[...]
What if the write crosses a cache line boundary? I think you need the
following fix-up (untested, I don't have this kind of hardware).
Ben.
---
From: Ben Hutchings <ben.hutchings@...ethink.co.uk>
Subject: x86, pmem: Fix cache flushing for iovec write < 8 bytes
Commit 11e63f6d920d added cache flushing for unaligned writes from an
iovec, covering the first and last cache line of a >= 8 byte write and
the first cache line of a < 8 byte write. But an unaligned write of
2-7 bytes can still cover two cache lines, so make sure we flush both
in that case.
Fixes: 11e63f6d920d ("x86, pmem: fix broken __copy_user_nocache ...")
Signed-off-by: Ben Hutchings <ben.hutchings@...ethink.co.uk>
---
arch/x86/include/asm/pmem.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h
index d5a22bac9988..0ff8fe71b255 100644
--- a/arch/x86/include/asm/pmem.h
+++ b/arch/x86/include/asm/pmem.h
@@ -98,7 +98,7 @@ static inline size_t arch_copy_from_iter_pmem(void *addr, size_t bytes,
 
 		if (bytes < 8) {
 			if (!IS_ALIGNED(dest, 4) || (bytes != 4))
-				arch_wb_cache_pmem(addr, 1);
+				arch_wb_cache_pmem(addr, bytes);
 		} else {
 			if (!IS_ALIGNED(dest, 8)) {
 				dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
--
Ben Hutchings
Software Developer, Codethink Ltd.