Message-ID: <5509C3E7.1030307@parallels.com>
Date:	Wed, 18 Mar 2015 11:28:55 -0700
From:	Maxim Patlasov <mpatlasov@...allels.com>
To:	Ming Lei <ming.lei@...onical.com>
CC:	<linux-kernel@...r.kernel.org>,
	Dave Kleikamp <dave.kleikamp@...cle.com>,
	Jens Axboe <axboe@...nel.dk>, Zach Brown <zab@...bo.net>,
	Christoph Hellwig <hch@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	Benjamin LaHaise <bcrl@...ck.org>
Subject: Re: [PATCH v2 4/4] block: loop: support to submit I/O via kernel
 aio based

On 01/13/2015 07:44 AM, Ming Lei wrote:
> Part of the patch is based on Dave's previous post.
>
> This patch submits I/O to the fs via kernel aio, which
> brings the following benefits:
>
> 	- double caching between the file system on the loop
> 	device and the backing file is avoided
> 	- context switches are greatly reduced, so CPU utilization
> 	drops
> 	- cached memory is greatly reduced
>
> One main side effect is that throughput drops when accessing
> the raw loop block device (not through a filesystem) with
> kernel aio.
>
> This patch has passed xfstests (./check -g auto), with both the
> test and scratch devices on loop block devices and ext4 as the
> file system.
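
For reference, an xfstests run of that shape can be set up roughly as
follows. This is a minimal sketch: the image paths, loop device names,
and mount points are illustrative, not taken from the patch.

    # create two sparse backing files and attach them as loop devices
    truncate -s 10G /var/tmp/test.img
    truncate -s 10G /var/tmp/scratch.img
    losetup /dev/loop0 /var/tmp/test.img
    losetup /dev/loop1 /var/tmp/scratch.img

    # xfstests expects a ready-made fs on the test device
    mkfs.ext4 /dev/loop0
    mkdir -p /mnt/test /mnt/scratch

    # point xfstests at the two loop devices and run the auto group
    export TEST_DEV=/dev/loop0 TEST_DIR=/mnt/test
    export SCRATCH_DEV=/dev/loop1 SCRATCH_MNT=/mnt/scratch
    export FSTYP=ext4
    cd xfstests && ./check -g auto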
>
> The results of two fio tests follow:
>
> 1. fio test inside an ext4 file system over a loop block device
> 1) How to run
> 	- linux kernel base: 3.19.0-rc3-next-20150108 (loop-mq merged)
> 	- loop device over SSD image 1 in ext4
> 	- fio: psync engine, 16 jobs, size 200M, ext4 over the loop block device
> 	- test result: IOPS from fio output
>
> 2) Throughput result (IOPS):
> 	-------------------------------------------------------------
> 	test cases          |randread   |read   |randwrite  |write  |
> 	-------------------------------------------------------------
> 	base                |16799      |59508  |31059      |58829  |
> 	-------------------------------------------------------------
> 	base+kernel aio     |15480      |64453  |30187      |57222  |
> 	-------------------------------------------------------------
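
For concreteness, the ext4-over-loop runs above correspond to fio
invocations along these lines. Only the parameters listed under 1)
(psync, 16 jobs, 200M) come from the original description; the mount
point is illustrative.

    # assumes the loop-backed ext4 fs is mounted at /mnt/loop (illustrative)
    for rw in randread read randwrite write; do
        fio --name="$rw" --directory=/mnt/loop \
            --ioengine=psync --rw="$rw" \
            --numjobs=16 --size=200M --group_reporting
    done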

Ming, it's important to understand the overhead of the aio_kernel_() 
implementation. So could you please add test results for a raw SSD 
device to the table above next time (in v3 of your patches)?

Jens, if you have some fast storage at hand, could you please measure 
IOPS for Ming's patches vs. the raw block device, to ensure that the 
patches do not impose too low a ceiling on performance?
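
Such a baseline could be collected with something like the following;
the device paths are illustrative, and the jobs are read-only so they
are safe to point at a live device:

    # raw device baseline (path illustrative)
    fio --name=raw --filename=/dev/nvme0n1 --ioengine=psync \
        --rw=randread --direct=1 --numjobs=16 --size=200M --group_reporting
    # same job against a loop device backed by a file on that device
    fio --name=loop --filename=/dev/loop0 --ioengine=psync \
        --rw=randread --direct=1 --numjobs=16 --size=200M --group_reporting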

Thanks,
Maxim
