Message-ID: <20170103232206.GD1555@ZenIV.linux.org.uk>
Date: Tue, 3 Jan 2017 23:22:06 +0000
From: Al Viro <viro@...IV.linux.org.uk>
To: Dan Williams <dan.j.williams@...el.com>
Cc: "Elliott, Robert (Persistent Memory)" <elliott@....com>,
Boaz Harrosh <boaz@...xistor.com>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
"Moreno, Oliver" <oliver.moreno@....com>,
"x86@...nel.org" <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
"boylston@...romesa.net" <boylston@...romesa.net>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [RFC] memcpy_nocache() and memcpy_writethrough()

On Tue, Jan 03, 2017 at 01:14:11PM -0800, Dan Williams wrote:
> Robert was describing the overall flow / mechanics, but I think it is
> easier to visualize the sfence as a flush command sent to a disk
> device with a volatile cache. In fact, that's how we implemented it in
> the pmem block device driver. The pmem block device registers itself
> as requiring REQ_FLUSH to be sent to persist writes. The driver issues
> sfence on the assumption that all writes to pmem have either bypassed
> the cache with movnt, or are scheduled for write-back via one of the
> flush instructions (clflush, clwb, or clflushopt).

*blink*

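If I'm reading you correctly, the driver-side pattern amounts to
something like this (a sketch only; hypothetical names, not the actual
drivers/nvdimm/pmem.c):

	/*
	 * "sfence as a disk cache flush": every prior store to pmem is
	 * assumed to have either bypassed the cache (movnt) or been
	 * scheduled for write-back (clflush/clwb/clflushopt); the fence
	 * is the point at which all of them are treated as durable.
	 */
	static void pmem_handle_flush(void)	/* on REQ_FLUSH */
	{
		/* wmb() in kernel terms */
		asm volatile("sfence" ::: "memory");
	}

On the kernel side of that assumption, though:
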
1) memcpy_to_pmem() seems to rely upon __copy_from_user_nocache()
having used only movnt; it does not attempt clwb at all.

2) __copy_from_user_nocache() does not use movnt at all for short
copies; in that case neither sfence nor clwb is issued.

3) it uses movnt only for the middle part of the copy when the copy is
misaligned. No clwb is issued, but sfence *is* - at the very end in the
64bit case, and between the movnt part and the copying of the tail in
the 32bit one; see the sketch below. Incidentally, while the 64bit case
takes care to align the destination for the movnt part, the 32bit one
does not.
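
To spell (2) and (3) out, here is roughly what the 64bit path does -
modulo faults and the actual assembly in arch/x86/lib/, with made-up
helper names:

	#include <stddef.h>
	#include <stdint.h>

	static void cached_copy(void *d, const void *s, size_t n)
	{
		__builtin_memcpy(d, s, n);	/* ordinary cacheable stores */
	}

	static void movnt_copy(void *d, const void *s, size_t n)
	{
		/* 8-byte non-temporal stores; d and n assumed 8-byte aligned */
		uint64_t *dq = d;
		const uint64_t *sq = s;
		size_t i;

		for (i = 0; i < n / 8; i++)
			asm volatile("movnti %1, %0"
				     : "=m" (dq[i]) : "r" (sq[i]));
	}

	static void nocache_copy_sketch(void *dst, const void *src, size_t n)
	{
		size_t head, body;

		if (n < 8) {				/* (2): short copy - */
			cached_copy(dst, src, n);	/* no movnt, no sfence */
			return;
		}

		head = -(uintptr_t)dst & 7;	/* (3): 64bit aligns dst... */
		cached_copy(dst, src, head);	/* ...after a cached head */

		body = (n - head) & ~(size_t)7;
		movnt_copy((char *)dst + head, (const char *)src + head, body);

		cached_copy((char *)dst + head + body,	/* cached tail */
			    (const char *)src + head + body,
			    n - head - body);

		asm volatile("sfence" ::: "memory");
		/*
		 * 64bit: the fence comes at the very end, i.e. *after* the
		 * cached tail; 32bit issues it between the movnt part and
		 * the tail, and never aligns the destination.
		 */
	}
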
How much of the above is broken, and what do the callers rely upon? In
particular, is that sfence the right thing for the pmem use cases?
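
(If the cached head/tail in (2) and (3) are supposed to become
persistent, I'd expect something along these lines to be needed before
that final sfence - hypothetical sketch, assuming 64-byte cachelines
and a CPU that actually has clwb:

	static void writeback_range(void *p, size_t n)
	{
		/* push the lines dirtied by cached stores towards memory;
		 * clwb keeps the line in cache, unlike clflush */
		uintptr_t a = (uintptr_t)p & ~(uintptr_t)63;

		for (; a < (uintptr_t)p + n; a += 64)
			asm volatile("clwb %0"
				     : "+m" (*(volatile char *)a));
	}

but nothing of that sort is there now.)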