Message-ID: <CAPcyv4h7CXenpDkb=d3RN8wo-rSfrQ_ioLoBPV3U+a_r3W9CEg@mail.gmail.com>
Date: Thu, 16 Feb 2017 19:56:34 -0800
From: Dan Williams <dan.j.williams@...el.com>
To: Ross Zwisler <ross.zwisler@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
Jan Kara <jack@...e.cz>,
Matthew Wilcox <mawilcox@...rosoft.com>,
X86 ML <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Christoph Hellwig <hch@....de>, Jeff Moyer <jmoyer@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Al Viro <viro@...iv.linux.org.uk>,
"H. Peter Anvin" <hpa@...or.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 03/13] x86, dax, pmem: introduce 'copy_from_iter' dax operation
On Thu, Feb 16, 2017 at 7:52 PM, Ross Zwisler
<ross.zwisler@...ux.intel.com> wrote:
> On Thu, Jan 19, 2017 at 07:50:29PM -0800, Dan Williams wrote:
>> The direct-I/O write path for a pmem device must ensure that data is flushed
>> to a power-fail safe zone when the operation is complete. However, other
>> dax capable block devices, like brd, do not have this requirement.
>> Introduce a 'copy_from_iter' dax operation so that pmem can inject
>> cache management without imposing this overhead on other dax capable
>> block_device drivers.
>>
>> Cc: <x86@...nel.org>
>> Cc: Jan Kara <jack@...e.cz>
>> Cc: Jeff Moyer <jmoyer@...hat.com>
>> Cc: Ingo Molnar <mingo@...hat.com>
>> Cc: Christoph Hellwig <hch@....de>
>> Cc: "H. Peter Anvin" <hpa@...or.com>
>> Cc: Al Viro <viro@...iv.linux.org.uk>
>> Cc: Thomas Gleixner <tglx@...utronix.de>
>> Cc: Matthew Wilcox <mawilcox@...rosoft.com>
>> Cc: Ross Zwisler <ross.zwisler@...ux.intel.com>
>> Signed-off-by: Dan Williams <dan.j.williams@...el.com>
>> ---
>> arch/x86/include/asm/pmem.h | 31 -------------------------------
>> drivers/nvdimm/pmem.c | 10 ++++++++++
>> fs/dax.c | 11 ++++++++++-
>> include/linux/blkdev.h | 1 +
>> include/linux/pmem.h | 24 ------------------------
>> 5 files changed, 21 insertions(+), 56 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h
>> index f26ba430d853..0ca5e693f4a2 100644
>> --- a/arch/x86/include/asm/pmem.h
>> +++ b/arch/x86/include/asm/pmem.h
>> @@ -64,37 +64,6 @@ static inline void arch_wb_cache_pmem(void *addr, size_t size)
>> clwb(p);
>> }
>>
>> -/*
>> - * copy_from_iter_nocache() on x86 only uses non-temporal stores for iovec
>> - * iterators, so for other types (bvec & kvec) we must do a cache write-back.
>> - */
>> -static inline bool __iter_needs_pmem_wb(struct iov_iter *i)
>> -{
>> - return iter_is_iovec(i) == false;
>> -}
>> -
>> -/**
>> - * arch_copy_from_iter_pmem - copy data from an iterator to PMEM
>> - * @addr: PMEM destination address
>> - * @bytes: number of bytes to copy
>> - * @i: iterator with source data
>> - *
>> - * Copy data from the iterator 'i' to the PMEM buffer starting at 'addr'.
>> - */
>> -static inline size_t arch_copy_from_iter_pmem(void *addr, size_t bytes,
>> - struct iov_iter *i)
>> -{
>> - size_t len;
>> -
>> - /* TODO: skip the write-back by always using non-temporal stores */
>> - len = copy_from_iter_nocache(addr, bytes, i);
>> -
>> - if (__iter_needs_pmem_wb(i))
>> - arch_wb_cache_pmem(addr, bytes);
>
> This writeback is no longer conditional in the pmem_copy_from_iter() version,
> which means that for iovec iterators you do a non-temporal store and then
> afterwards take the time to loop through and flush the cachelines? This seems
> incorrect, and I wonder if this could be the cause of the performance
> regression reported by 0-day?
I'm pretty sure you're right. What I was planning for the next version
of this patch is to handle the unaligned case in the local assembly so
that we never need to do a flush loop after the fact.