Message-ID: <CAPcyv4jfUVXoge5D+cBY1Ph=t60165sp6sF_QFZUbFv+cNcdHg@mail.gmail.com>
Date: Mon, 2 May 2016 10:53:25 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: Jeff Moyer <jmoyer@...hat.com>
Cc: Dave Chinner <david@...morbit.com>,
"Verma, Vishal L" <vishal.l.verma@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
"hch@...radead.org" <hch@...radead.org>,
"xfs@....sgi.com" <xfs@....sgi.com>,
"linux-nvdimm@...1.01.org" <linux-nvdimm@...1.01.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
"axboe@...com" <axboe@...com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
"Wilcox, Matthew R" <matthew.r.wilcox@...el.com>,
"jack@...e.cz" <jack@...e.cz>
Subject: Re: [PATCH v2 5/5] dax: handle media errors in dax_do_io
On Mon, May 2, 2016 at 8:18 AM, Jeff Moyer <jmoyer@...hat.com> wrote:
> Dave Chinner <david@...morbit.com> writes:
[..]
>> We need some form of redundancy and correction in the PMEM stack to
>> prevent single sector errors from taking down services until an
>> administrator can correct the problem. I'm trying to understand
>> where this is supposed to fit into the picture - at this point I
>> really don't think userspace applications are going to be able to do
>> this reliably....
>
> Not all storage is configured into a RAID volume, and in some instances,
> the application is better positioned to recover the data (gluster/ceph,
> for example). It really comes down to whether applications or libraries
> will want to implement redundancy themselves in order to get a bump in
> performance by not going through the kernel. And I think I know what
> your opinion is on that front. :-)
>
> Speaking of which, did you see the numbers Dan shared at LSF on how much
> overhead there is in calling into the kernel for syncing? Dan, can/did
> you publish that spreadsheet somewhere?
Here it is:
https://docs.google.com/spreadsheets/d/1pwr9psy6vtB9DOsc2bUdXevJRz5Guf6laZ4DaZlkhoo/edit?usp=sharing
On the "Filtered" tab I have some of the comparisons where:
noop => don't call msync and don't flush caches in userspace
persist => cache flushing only in userspace and only on individual cache lines
persist_4k => cache flushing only in userspace, but flushing is
performed in 4K aligned units
msync => same granularity flushing as the 'persist' case, but the
kernel internally promotes this to a 4K sized / aligned flush
msync_0 => synthetic case where msync() returns immediately and does
no other work
The takeaway is that msync() is 9-10x slower than userspace cache management.
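To make the comparison concrete, the 'persist' case boils down to
flushing just the dirtied cache lines from userspace and fencing,
while the 'msync' case pays the syscall and the page-granularity
flush.  A rough sketch follows (not the actual benchmark code; the
path is hypothetical and it assumes a CPU with CLWB and a file on a
DAX-capable filesystem):

/* build with: gcc -O2 -mclwb persist_vs_msync.c */
#include <immintrin.h>   /* _mm_clwb, _mm_sfence */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define CACHELINE 64

/* 'persist' case: flush only the cache lines that were dirtied */
static void persist(const void *addr, size_t len)
{
        uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHELINE - 1);
        uintptr_t end = (uintptr_t)addr + len;

        for (; p < end; p += CACHELINE)
                _mm_clwb((void *)p);
        _mm_sfence();
}

int main(void)
{
        int fd = open("/mnt/pmem0/data", O_RDWR); /* hypothetical path */
        if (fd < 0)
                return 1;

        size_t maplen = 4096;
        char *buf = mmap(NULL, maplen, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED)
                return 1;

        memcpy(buf, "hello", 5);
        persist(buf, 5);        /* userspace cache management */

        /* 'msync' case: same store, but the kernel flushes a 4K
         * aligned region and we take the syscall overhead */
        memcpy(buf, "world", 5);
        msync(buf, maplen, MS_SYNC);

        munmap(buf, maplen);
        close(fd);
        return 0;
}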
Let me know if there are any questions and I can add an NVML developer
to this thread...