Message-ID: <CS1PR84MB0119D6AB03D91248DAFE0E538EDA0@CS1PR84MB0119.NAMPRD84.PROD.OUTLOOK.COM>
Date: Tue, 11 Oct 2016 13:26:07 +0000
From: "Boylston, Brian" <brian.boylston@....com>
To: "Kani, Toshimitsu" <toshi.kani@....com>,
"viro@...IV.linux.org.uk" <viro@...IV.linux.org.uk>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"x86@...nel.org" <x86@...nel.org>,
"dan.j.williams@...el.com" <dan.j.williams@...el.com>,
"hpa@...or.com" <hpa@...or.com>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
"Moreno, Oliver" <oliver.moreno@....com>,
"mingo@...hat.com" <mingo@...hat.com>,
"ross.zwisler@...ux.intel.com" <ross.zwisler@...ux.intel.com>
Subject: RE: [PATCH] use a nocache copy for bvecs in copy_from_iter_nocache()
Kani, Toshimitsu wrote on 2016-10-10:
> On Fri, 2016-10-07 at 18:08 +0100, Al Viro wrote:
>> On Fri, Oct 07, 2016 at 10:55:11AM -0500, Brian Boylston wrote:
>>>
>>> copy_from_iter_nocache() is only "nocache" for iovecs. Enhance it
>>> to also use a nocache copy for bvecs. This improves performance by
>>> 2-3X when splice()ing to a file in a DAX-mounted, pmem-backed file
>>> system.
>>
>>>
>>> +static void memcpy_from_page_nocache(char *to, struct page *page,
>>> +				     size_t offset, size_t len)
>>> +{
>>> +	char *from = kmap_atomic(page);
>>> +	__copy_from_user_inatomic_nocache(to, from + offset, len);
>>> +	kunmap_atomic(from);
>>> +}
>>
>> At the very least, it will blow up on any architecture with split
>> userland and kernel MMU contexts. You *can't* feed a kernel pointer
>> to things like that and expect it to work. At the very least, you
>> need to add memcpy_nocache() and have it default to memcpy(), with
>> non-dummy version on x86. And use _that_, rather than messing with
>> __copy_from_user_inatomic_nocache()
>
> Good point. I think we can add memcpy_nocache(), which calls
> __copy_from_user_inatomic_nocache() on x86 and defaults to memcpy() on
> other architectures.
Thanks, Al and Toshi, for the feedback. I'll re-work and come back.
Brian