Date:	Tue, 12 Jul 2016 22:01:01 -0400
From:	Mike Snitzer <snitzer@...hat.com>
To:	"Kani, Toshimitsu" <toshi.kani@....com>, axboe@...com
Cc:	"linux-raid@...r.kernel.org" <linux-raid@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"dan.j.williams@...el.com" <dan.j.williams@...el.com>,
	"dm-devel@...hat.com" <dm-devel@...hat.com>,
	"ross.zwisler@...ux.intel.com" <ross.zwisler@...ux.intel.com>,
	"linux-nvdimm@...1.01.org" <linux-nvdimm@...1.01.org>,
	"agk@...hat.com" <agk@...hat.com>
Subject: Re: dm stripe: add DAX support

On Tue, Jul 12 2016 at  6:22pm -0400,
Kani, Toshimitsu <toshi.kani@....com> wrote:

> On Fri, 2016-06-24 at 14:29 -0400, Mike Snitzer wrote:
> > 
> > BTW, if in your testing you could evaluate/quantify any extra overhead
> > from DM that'd be useful to share.  It could be there are bottlenecks
> > that need to be fixed, etc.
> 
> Here are some results from an fio benchmark.  The test is single-threaded
> and bound to one CPU.
> 
>  DAX  LVM   IOPS   NOTE
>  ---------------------------------------
>   Y    N    790K
>   Y    Y    754K   5% overhead with LVM
>   N    N    567K
>   N    Y    457K   20% overhead with LVM
> 
>  DAX: Y: mount -o dax,noatime, N: mount -o noatime
>  LVM: Y: dm-linear on pmem0 device, N: pmem0 device
>  fio: bs=4k, size=2G, direct=1, rw=randread, numjobs=1
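
[For readers who want to reproduce the workload without fio, here is a
rough C equivalent of the job above (bs=4k, size=2G, direct=1,
rw=randread, numjobs=1).  The file path and read count are placeholders,
and fio itself is what produced the numbers quoted here; run it under
"taskset -c 0" to match the single-CPU binding described above.

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BS	4096			/* bs=4k */
#define FSIZE	(2ULL << 30)		/* size=2G */
#define NREADS	(1 << 20)		/* arbitrary; fio bounds by size/time */

int main(void)
{
	/* O_DIRECT matches direct=1; the file sits on the pmem filesystem */
	int fd = open("/mnt/pmem0/testfile", O_RDONLY | O_DIRECT);
	void *buf;
	unsigned long long nblocks = FSIZE / BS;
	long i;

	if (fd < 0 || posix_memalign(&buf, BS, BS)) {
		perror("setup");
		return 1;
	}
	for (i = 0; i < NREADS; i++) {
		/* rw=randread: one aligned 4k read at a random offset */
		off_t off = (off_t)(rand() % nblocks) * BS;

		if (pread(fd, buf, BS, off) != BS) {
			perror("pread");
			return 1;
		}
	}
	close(fd);
	return 0;
}
]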
> 
> Of the 5% overhead in the DAX/LVM case, the new DM direct_access
> interfaces account for less than 0.5%.
> 
>  dm_blk_direct_access 0.28%
>  linear_direct_access 0.17%
> 
> The average latency increases slightly, from 0.93us to 0.95us.  I think
> most of the overhead comes from the submit_bio() path, which is used only
> for accessing metadata with DAX.  I believe this is due to DM cloning the
> bio for each request.  There are 12% more L2 misses in total.
> 
> Without DAX, 20% overhead is observed with LVM.  Average latency increases
> from 1.39us to 1.82us.  Without DAX, the bio is cloned for both data and
> metadata.
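
[For context on why dm_blk_direct_access and linear_direct_access are so
cheap: on this path a stacked direct_access call only remaps the sector
and forwards to the underlying pmem device; no bio is allocated or
cloned.  A minimal sketch against the 4.8-era blk_dax_ctl /
bdev_direct_access interface follows; details are approximate, not a
verbatim copy of the patch series.

static long linear_direct_access_sketch(struct dm_target *ti,
					sector_t sector,
					void __pmem **kaddr, pfn_t *pfn,
					long size)
{
	struct linear_c *lc = ti->private;
	struct blk_dax_ctl dax = {
		/* the only real work: offset into the mapped device */
		.sector = linear_map_sector(ti, sector),
		.size = size,
	};
	long ret = bdev_direct_access(lc->dev->bdev, &dax);

	*kaddr = dax.addr;	/* direct kernel mapping of the pmem */
	*pfn = dax.pfn;		/* page frame, used for mmap faults */
	return ret;
}
]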

Thanks for putting this summary together.  Unfortunately none of the DM
changes can be queued for 4.8 until Jens takes the 2 block core patches:
https://patchwork.kernel.org/patch/9196021/
https://patchwork.kernel.org/patch/9196019/

Not sure what the hold-up and/or issue is with them.  But I've asked
twice (and implicitly a 3rd time here).  Hopefully they land in time for
4.8.

Mike
