Message-ID: <1497489635.9288.18.camel@hpe.com>
Date: Thu, 15 Jun 2017 01:21:08 +0000
From: "Kani, Toshimitsu" <toshi.kani@....com>
To: "dan.j.williams@...el.com" <dan.j.williams@...el.com>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>
CC: "dm-devel@...hat.com" <dm-devel@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
"hch@....de" <hch@....de>, "x86@...nel.org" <x86@...nel.org>,
"snitzer@...hat.com" <snitzer@...hat.com>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH v3 02/14] dm: add ->copy_from_iter() dax operation support
On Wed, 2017-06-14 at 18:45 -0600, Toshi Kani wrote:
> On Fri, 2017-06-09 at 13:23 -0700, Dan Williams wrote:
> > Allow device-mapper to route copy_from_iter operations to the
> > per-target implementation. In order for the device stacking to work
> > we need a dax_dev and a pgoff relative to that device. This gives
> > each layer of the stack the information it needs to look up the
> > operation pointer for the next level.
> >
> > This conceptually allows for an array of mixed device drivers with
> > varying copy_from_iter implementations.
> >
> > Cc: Toshi Kani <toshi.kani@....com>
> > Reviewed-by: Mike Snitzer <snitzer@...hat.com>
> > Signed-off-by: Dan Williams <dan.j.williams@...el.com>
>
> I was worried about possible overhead from the additional stub calls,
> but it looks fine in a single-thread fio write test with direct=1.
>
> 92.62% [kernel.kallsyms] [k] __copy_user_nocache
> 0.04% [kernel.kallsyms] [k] entry_SYSCALL_64_fastpath
> 0.08% libpthread-2.22.so [.] __GI___libc_write
> 0.01% [kernel.kallsyms] [k] sys_write
> 0.02% [kernel.kallsyms] [k] vfs_write
> 0.02% [kernel.kallsyms] [k] __vfs_write
> 0.02% [kernel.kallsyms] [k] ext4_file_write_iter
> 0.02% [kernel.kallsyms] [k] dax_iomap_rw
> 0.03% [kernel.kallsyms] [k] iomap_apply
> 0.04% [kernel.kallsyms] [k] dax_iomap_actor
> 0.01% [kernel.kallsyms] [k] dax_copy_from_iter
> 0.01% [kernel.kallsyms] [k] dm_dax_copy_from_iter
> 0.01% [kernel.kallsyms] [k] linear_dax_copy_from_iter
> 0.03% [kernel.kallsyms] [k] copy_from_iter_flushcache
> 0.00% [kernel.kallsyms] [k] pmem_copy_from_iter
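To make the stacking concrete, here is a rough sketch of what the
dm-core stub in the chain above does, paraphrased from the patch
description and the perf symbols rather than copied from the patch
itself; helper names like dm_dax_get_live_target() are my shorthand and
may not match the actual code. The idea: turn the pgoff into a sector,
find the live target that owns it, and forward to that target's
copy_from_iter op.

/* Sketch only -- kernel side (drivers/md); names paraphrased. */
static size_t dm_dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
		void *addr, size_t bytes, struct iov_iter *i)
{
	struct mapped_device *md = dax_get_private(dax_dev);
	sector_t sector = pgoff * PAGE_SECTORS;
	struct dm_target *ti;
	size_t ret = 0;
	int srcu_idx;

	/* Find the target (e.g. dm-linear) that maps this sector. */
	ti = dm_dax_get_live_target(md, sector, &srcu_idx);
	if (!ti)
		goto out;
	if (!ti->type->dax_copy_from_iter) {
		/* Target has no dax op: fall back to a plain copy. */
		ret = copy_from_iter(addr, bytes, i);
		goto out;
	}
	/* pgoff is still relative to the dm device; the target remaps it. */
	ret = ti->type->dax_copy_from_iter(ti, pgoff, addr, bytes, i);
 out:
	dm_put_live_table(md, srcu_idx);
	return ret;
}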
As for the numbers above: I had bs=256k, which was too large for this
test. The bs=4k result is not nearly as pretty; only 23% of the time is
spent in __copy_user_nocache, and this change accounts for roughly 1%
with 4k. Given that many other functions in the path carry larger
overheads, the change still looks acceptable (I keep my Reviewed-by).
I'd prefer to reduce the code in the path, though; the extra per-target
hop looks roughly like the sketch below.
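Paraphrasing again from the symbols above rather than from the patch
itself (linear_map_sector() and bdev_dax_pgoff() stand in for whatever
helpers the real target uses): the per-target hop remaps the offset into
the underlying device and calls the next level's dax_copy_from_iter().

/* Sketch only -- per-target (dm-linear) level; names paraphrased. */
static size_t linear_dax_copy_from_iter(struct dm_target *ti, pgoff_t pgoff,
		void *addr, size_t bytes, struct iov_iter *i)
{
	struct linear_c *lc = ti->private;
	sector_t sector = pgoff * PAGE_SECTORS;

	/* Shift into the underlying device's sector space ... */
	sector = linear_map_sector(ti, sector);
	/* ... and convert back to a pgoff relative to that device. */
	if (bdev_dax_pgoff(lc->dev->bdev, sector, ALIGN(bytes, PAGE_SIZE),
				&pgoff))
		return 0;
	/* Next level down: for pmem this ends up in copy_from_iter_flushcache(). */
	return dax_copy_from_iter(lc->dev->dax_dev, pgoff, addr, bytes, i);
}

Each 4k write thus goes through the dm stub, the target stub, and the
pgoff remap before reaching pmem_copy_from_iter(), which is presumably
where the extra ~1% shows up.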
Thanks,
-Toshi