Message-ID: <20130326122713.GC27610@agk-dp.fab.redhat.com>
Date:	Tue, 26 Mar 2013 12:27:13 +0000
From:	Alasdair G Kergon <agk@...hat.com>
To:	Mikulas Patocka <mpatocka@...hat.com>
Cc:	Mike Snitzer <msnitzer@...hat.com>, dm-devel@...hat.com,
	Andi Kleen <andi@...stfloor.org>, dm-crypt@...ut.de,
	Milan Broz <gmazyland@...il.com>, linux-kernel@...r.kernel.org,
	Christoph Hellwig <hch@...radead.org>,
	Christian Schmidt <schmidt@...add.de>
Subject: Re: [dm-devel] dm-crypt performance

[Adding dm-crypt + linux-kernel]

On Mon, Mar 25, 2013 at 11:47:22PM -0400, Mikulas Patocka wrote:
> I performed some dm-crypt performance tests as Mike suggested.
> 
> It turns out that unbound workqueue performance has improved somewhere 
> between kernel 3.2 (when I made the dm-crypt patches) and 3.8, so the 
> patches for hand-built dispatch are no longer needed.
> 
> For RAID-0 composed of two disks with total throughput 260MB/s, the 
> unbound workqueue performs as well as the hand-built dispatch (both 
> sustain the 260MB/s transfer rate).
> 
> For ramdisk, the unbound workqueue performs better than the hand-built 
> dispatch (620MB/s vs. 400MB/s). The unbound workqueue with the patch that 
> Mike suggested (git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git) 
> improves performance slightly on ramdisk compared to 3.8 (700MB/s vs. 620MB/s).
> 
> 
> 
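For context, "unbound" means the queue is created with the WQ_UNBOUND flag,
so the scheduler is free to run work items on any CPU rather than on the CPU
that queued them. A minimal sketch of such an allocation, with an illustrative
name and not the actual dm-crypt code:

#include <linux/workqueue.h>

static struct workqueue_struct *crypt_wq;

static int crypt_wq_init(void)
{
	/*
	 * WQ_UNBOUND: work items may run on any CPU instead of being
	 * bound to the CPU that queued them.
	 * WQ_MEM_RECLAIM: guarantees forward progress under memory
	 * pressure, which a queue on the I/O path needs.
	 * max_active = 0 selects the default concurrency limit.
	 */
	crypt_wq = alloc_workqueue("kcryptd_sketch",
				   WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	if (!crypt_wq)
		return -ENOMEM;
	return 0;
}

Work is then queued with the usual INIT_WORK()/queue_work() calls; only the
allocation flags differ from a bound queue.
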
> However, there is still the problem with request ordering. Milan found out 
> that under some circumstances parallel dm-crypt has worse performance than 
> the previous dm-crypt code. I found out that this is not caused by 
> deficiencies in the code that distributes work to individual processors. 
> The performance drop is caused by the fact that distributing write bios to 
> multiple processors causes the encryption to finish out of order, and the 
> I/O scheduler is unable to merge these out-of-order bios.
> 
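To spell out the merging problem: the block layer will back-merge a bio into
a queued request only if the two are contiguous on disk, conceptually a check
like the sketch below (field names follow recent kernels; illustrative only).
When completions arrive out of order, the neighbour a bio could have merged
with has often already been dispatched on its own:

/*
 * Sketch: a bio can be back-merged after 'prev' only if it starts
 * exactly where 'prev' ends. Out-of-order completion breaks this.
 */
static bool sketch_back_mergeable(struct bio *prev, struct bio *next)
{
	return bio_end_sector(prev) == next->bi_iter.bi_sector;
}
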
> The deadline and noop schedulers perform better (only a 50% slowdown 
> compared to the old dm-crypt); CFQ performs very badly (an 8x slowdown).
> 
> 
> If I sort the requests in dm-crypt so that they come out in the same order 
> as they were received, there is no longer any slowdown; the new crypt 
> performs as well as the old one. But the last time I submitted the patches, 
> people objected to sorting requests in dm-crypt, saying that the I/O 
> scheduler should sort them. It doesn't, and this problem still persists in 
> current kernels.
> 
> 
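For concreteness, one way the sorting could look: park encrypted write bios
in an rbtree and let a single dispatcher drain it in order. The sketch below
keys on the start sector as a stand-in for submission order (for sequential
writes the two coincide); it is illustrative, not the patches in question:

#include <linux/rbtree.h>
#include <linux/bio.h>

struct sorted_bio {
	struct rb_node node;
	struct bio *bio;
	sector_t sector;	/* start sector recorded at queue time */
};

/* Insert a bio that finished encryption, ordered by start sector. */
static void sorted_bio_insert(struct rb_root *root, struct sorted_bio *sb)
{
	struct rb_node **link = &root->rb_node, *parent = NULL;

	while (*link) {
		struct sorted_bio *cur;

		parent = *link;
		cur = rb_entry(parent, struct sorted_bio, node);
		link = sb->sector < cur->sector ? &parent->rb_left
						: &parent->rb_right;
	}
	rb_link_node(&sb->node, parent, link);
	rb_insert_color(&sb->node, root);
}

/* Drain the tree in ascending order so the scheduler sees in-order bios. */
static void sorted_bio_dispatch(struct rb_root *root)
{
	struct rb_node *node;

	while ((node = rb_first(root))) {
		struct sorted_bio *sb = rb_entry(node, struct sorted_bio, node);

		rb_erase(node, root);
		submit_bio(sb->bio);	/* kfree(sb) omitted for brevity */
	}
}

In real code the tree would need a lock shared between the encryption
threads and the dispatcher, and submit_bio()'s signature varies by kernel
version; both points are glossed over here.
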
> For best performance we could use the unbound workqueue implementation 
> with request sorting, if people don't object to the request sorting being 
> done in dm-crypt.


On Tue, Mar 26, 2013 at 02:52:29AM -0400, Christoph Hellwig wrote:
> FYI, XFS also does its own request ordering for the metadata buffers,
> because it knows the needed ordering and has a bigger view than the
> I/O scheduler, especially CFQ.  You at least have precedent in a widely
> used subsystem for this code.


So please post this updated version of the patches for a wider group of
people to try out.

Alasdair

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
