Date:	Mon, 19 Aug 2013 09:59:09 +0800
From:	Miao Xie <miaox@...fujitsu.com>
To:	Fengguang Wu <fengguang.wu@...el.com>
CC:	Andrew Morton <akpm@...ux-foundation.org>, Tao Ma <tm@....ma>,
	Linux Memory Management List <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: readahead: make context readahead more conservative

Hi, everyone

On Thu, 8 Aug 2013 16:54:18 +0800, Fengguang Wu wrote:
> This helps performance on moderately dense random reads on SSD.
> 
> Queries-per-second (QPS) numbers provided by Taobao:
> 
> 		QPS	case
> 		-------------------------------------------------------
> 		7536	disable context readahead totally
> w/ patch:	7129	slower size rampup and start RA on the 3rd read
> 		6717	slower size rampup
> w/o patch:	5581	unmodified context readahead
> 
> Before this patch, readahead would be started whenever page N+1 was
> read and page N happened to have been read recently. After the patch,
> readahead only starts when *three* random reads happen to access
> pages N, N+1 and N+2 in sequence. The probability of that happening
> is extremely low for pure random reads, unless they are very dense,
> in which case the access pattern actually deserves some readahead.
> 
> Also start with a smaller readahead window. The impact on interleaved
> sequential reads should be small, because for a long-running stream
> the small readahead window ramp-up phase is negligible.
> 
> Context readahead actually benefits clustered random reads on HDDs,
> whose seek cost is pretty high. However, as SSDs are increasingly
> used for random-read workloads, it's better for context readahead to
> concentrate on interleaved sequential reads.
> 
> Another SSD random read test from Miao:
> 
>         # file size:        2GB
>         # read IO amount: 625MB (10000 requests x 64KB)
>         sysbench --test=fileio          \
>                 --max-requests=10000    \
>                 --num-threads=1         \
>                 --file-num=1            \
>                 --file-block-size=64K   \
>                 --file-test-mode=rndrd  \
>                 --file-fsync-freq=0     \
>                 --file-fsync-end=off    run
> 
> shows btrfs throughput growing from 69MB/s to 121MB/s and ext4 from
> 104MB/s to 121MB/s.

I did the same test on a hard disk recently. For btrfs there is a ~5%
regression (10.65MB/s -> 10.09MB/s); for ext4 the performance improves
slightly (9.98MB/s -> 10.04MB/s).
(I ran the test 4 times; the above results are the averages.)

Any comment?

Thanks
Miao

> 
> Tested-by: Tao Ma <tm@....ma>
> Tested-by: Miao Xie <miaox@...fujitsu.com>
> Signed-off-by: Wu Fengguang <fengguang.wu@...el.com>
> ---
>  mm/readahead.c |    8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> --- linux-next.orig/mm/readahead.c	2013-08-08 16:21:29.675286154 +0800
> +++ linux-next/mm/readahead.c	2013-08-08 16:21:33.851286019 +0800
> @@ -371,10 +371,10 @@ static int try_context_readahead(struct
>  	size = count_history_pages(mapping, ra, offset, max);
>  
>  	/*
> -	 * no history pages:
> +	 * not enough history pages:
>  	 * it could be a random read
>  	 */
> -	if (!size)
> +	if (size <= req_size)
>  		return 0;
>  
>  	/*
> @@ -385,8 +385,8 @@ static int try_context_readahead(struct
>  		size *= 2;
>  
>  	ra->start = offset;
> -	ra->size = get_init_ra_size(size + req_size, max);
> -	ra->async_size = ra->size;
> +	ra->size = min(size + req_size, max);
> +	ra->async_size = 1;
>  
>  	return 1;
>  }
> 
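Below is a minimal userspace sketch of the trigger change in the patch
above, assuming a toy in-memory page cache. The cached[] bitmap, the
history_pages() helper, MAX_PAGES and the demo read offsets are
illustrative stand-ins for the kernel's page cache and
count_history_pages(); this is not the kernel code and skips the
doubling heuristic for reads that start mid-history.

/*
 * Sketch of the patched try_context_readahead() trigger (illustration
 * only). history_pages() and cached[] model count_history_pages() and
 * the page cache.
 */
#include <stdio.h>

#define MAX_PAGES 1024

static unsigned char cached[MAX_PAGES];	/* toy page cache: 1 = cached */

struct ra_state {
	unsigned long start;
	unsigned long size;
	unsigned long async_size;
};

/* Consecutive cached pages immediately before 'offset', capped at 'max'. */
static unsigned long history_pages(unsigned long offset, unsigned long max)
{
	unsigned long n = 0;

	while (n < max && offset > n && cached[offset - 1 - n])
		n++;
	return n;
}

static int try_context_ra(struct ra_state *ra, unsigned long offset,
			  unsigned long req_size, unsigned long max)
{
	unsigned long size = history_pages(offset, max);

	/* was: if (!size) -- now require more history than the request */
	if (size <= req_size)
		return 0;

	ra->start = offset;
	/* was: get_init_ra_size(size + req_size, max), a larger ramp-up */
	ra->size = size + req_size < max ? size + req_size : max;
	ra->async_size = 1;	/* was: ra->size */
	return 1;
}

int main(void)
{
	struct ra_state ra = { 0, 0, 0 };
	unsigned long offsets[] = { 100, 101, 102 };	/* three adjacent reads */
	int i;

	for (i = 0; i < 3; i++) {
		unsigned long off = offsets[i];

		if (try_context_ra(&ra, off, 1, 32))
			printf("page %lu: readahead start=%lu size=%lu async_size=%lu\n",
			       off, ra.start, ra.size, ra.async_size);
		else
			printf("page %lu: no readahead\n", off);
		cached[off] = 1;	/* the read itself populates the cache */
	}
	return 0;
}

With one-page reads (req_size = 1), only the third adjacent read (page
102, with pages 100 and 101 already cached) triggers readahead, which
matches the "three reads at N, N+1, N+2" behaviour described in the
changelog; under the old "if (!size)" test the second read would
already have triggered it.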

