Date:   Tue, 9 May 2017 10:10:57 -0700
From:   Dan Williams <dan.j.williams@...el.com>
To:     Ben Hutchings <ben.hutchings@...ethink.co.uk>
Cc:     Ross Zwisler <ross.zwisler@...ux.intel.com>,
        Toshi Kani <toshi.kani@....com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "stable@...r.kernel.org" <stable@...r.kernel.org>,
        X86 ML <x86@...nel.org>, Jan Kara <jack@...e.cz>,
        Jeff Moyer <jmoyer@...hat.com>, Ingo Molnar <mingo@...hat.com>,
        Christoph Hellwig <hch@....de>,
        "H. Peter Anvin" <hpa@...or.com>,
        Al Viro <viro@...iv.linux.org.uk>,
        Thomas Gleixner <tglx@...utronix.de>,
        Matthew Wilcox <mawilcox@...rosoft.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH 4.4 26/28] x86, pmem: fix broken __copy_user_nocache
 cache-bypass assumptions

On Tue, May 9, 2017 at 10:00 AM, Ben Hutchings
<ben.hutchings@...ethink.co.uk> wrote:
> On Tue, 2017-04-25 at 16:08 +0100, Greg Kroah-Hartman wrote:
>> 4.4-stable review patch.  If anyone has any objections, please let me know.
>>
>> ------------------
>>
>> From: Dan Williams <dan.j.williams@...el.com>
>>
>> commit 11e63f6d920d6f2dfd3cd421e939a4aec9a58dcd upstream.
> [...]
>> +     if (iter_is_iovec(i)) {
>> +             unsigned long flushed, dest = (unsigned long) addr;
>> +
>> +             if (bytes < 8) {
>> +                     if (!IS_ALIGNED(dest, 4) || (bytes != 4))
>> +                             __arch_wb_cache_pmem(addr, 1);
> [...]
>
> What if the write crosses a cache line boundary?  I think you need the
> following fix-up (untested, I don't have this kind of hardware).
>
> Ben.
>
> ---
> From: Ben Hutchings <ben.hutchings@...ethink.co.uk>
> Subject: x86, pmem: Fix cache flushing for iovec write < 8 bytes
>
> Commit 11e63f6d920d added cache flushing for unaligned writes from an
> iovec, covering the first and last cache line of a >= 8 byte write and
> the first cache line of a < 8 byte write.  But an unaligned write of
> 2-7 bytes can still cover two cache lines, so make sure we flush both
> in that case.
>
> Fixes: 11e63f6d920d ("x86, pmem: fix broken __copy_user_nocache ...")
> Signed-off-by: Ben Hutchings <ben.hutchings@...ethink.co.uk>
> ---
>  arch/x86/include/asm/pmem.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h
> index d5a22bac9988..0ff8fe71b255 100644
> --- a/arch/x86/include/asm/pmem.h
> +++ b/arch/x86/include/asm/pmem.h
> @@ -98,7 +98,7 @@ static inline size_t arch_copy_from_iter_pmem(void *addr, size_t bytes,
>
>                 if (bytes < 8) {
>                         if (!IS_ALIGNED(dest, 4) || (bytes != 4))
> -                               arch_wb_cache_pmem(addr, 1);
> +                               arch_wb_cache_pmem(addr, bytes);

Yes, this looks correct to me. Thanks for catching that, Ben.
