Message-ID: <4B8776FC.30409@linux.vnet.ibm.com>
Date:	Fri, 26 Feb 2010 08:23:40 +0100
From:	Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>
To:	Wu Fengguang <fengguang.wu@...el.com>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Jens Axboe <jens.axboe@...cle.com>,
	Matt Mackall <mpm@...enic.com>,
	Chris Mason <chris.mason@...cle.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Clemens Ladisch <clemens@...isch.de>,
	Olivier Galibert <galibert@...ox.com>,
	Vivek Goyal <vgoyal@...hat.com>, Nick Piggin <npiggin@...e.de>,
	Linux Memory Management List <linux-mm@...ck.org>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 05/15] readahead: limit readahead size for small memory
 systems

Unfortunately I have no chance to measure this at the moment, but the
patch now looks really good to me.
Thanks for adapting it to a read-ahead-only per-memory limit.
Acked-by: Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>


Wu Fengguang wrote:
> On Thu, Feb 25, 2010 at 11:25:54PM +0800, Christian Ehrhardt wrote:
>>
>> Wu Fengguang wrote:
>>  > When lifting the default readahead size from 128KB to 512KB,
>>  > make sure it won't add memory pressure to small memory systems.
>>  >
>>  > For read-ahead, the memory pressure comes mainly from readahead
>>  > buffers consumed by too many concurrent streams. The context
>>  > readahead can adapt the readahead size to the thrashing threshold
>>  > well, so in principle we don't need to adapt the default _max_
>>  > read-ahead size to memory pressure.
>>  >
>>  > For read-around, the memory pressure comes mainly from read-around
>>  > misses on executables/libraries, which could be reduced by scaling
>>  > down the read-around size on fast "reclaim passes".
>>  >
>>  > This patch presents a straightforward solution: limit the default
>>  > readahead size in proportion to available system memory, i.e.
>>  >                 512MB mem => 512KB readahead size
>>  >                 128MB mem => 128KB readahead size
>>  >                  32MB mem =>  32KB readahead size (minimal)
>>  >
>>  > Strictly speaking, only the read-around size has to be limited.
>>  > However, we don't bother to separate the read-around size from the
>>  > read-ahead size for now.
>>  >
>>  > CC: Matt Mackall <mpm@...enic.com>
>>  > Signed-off-by: Wu Fengguang <fengguang.wu@...el.com>
>>
>> What I state here is for read-ahead in a "multi iozone sequential"
>> setup; I can't speak for real "read-around" workloads.
>> So your table is probably fine to cover read-around + read-ahead in
>> one number.
> 
> OK.
> 
>> I have tested 256MB mem systems with 512kb readahead quite a lot.
>> On those, 512kb is still by far superior to smaller readaheads, and I
>> didn't see major thrashing or memory-pressure impact.
> 
> In fact I'd expect a 64MB box to also benefit from 512kb readahead :)
> 
>> Therefore I would recommend a table like:
>>                 >=256MB mem => 512KB readahead size
>>                   128MB mem => 128KB readahead size
>>                    32MB mem =>  32KB readahead size (minimal)
> 
> So, I'm fed up with compromising between the read-ahead size and the
> read-around size.
> 
> There is no good reason, though, to introduce a separate read-around
> size that would only confuse the user.  Instead, I'll introduce a
> read-around size limit _on top of_ the readahead size. This will allow
> power users to adjust the read-ahead/read-around size at the same
> time, while saving the low end from unnecessary memory pressure :)
> I'm assuming that low-end users have no need to request a large
> read-around size.
> 
> Thanks,
> Fengguang
> ---
> readahead: limit read-ahead size for small memory systems
> 
> When lifting the default readahead size from 128KB to 512KB,
> make sure it won't add memory pressure to small memory systems.
> 
> For read-ahead, the memory pressure comes mainly from readahead
> buffers consumed by too many concurrent streams. The context readahead
> can adapt the readahead size to the thrashing threshold well, so in
> principle we don't need to adapt the default _max_ read-ahead size to
> memory pressure.
> 
> For read-around, the memory pressure comes mainly from read-around
> misses on executables/libraries, which could be reduced by scaling
> down the read-around size on fast "reclaim passes".
> 
> This patch presents a straightforward solution: limit the default
> read-ahead size in proportion to available system memory, i.e.
>                 512MB mem => 512KB readahead size
>                 128MB mem => 128KB readahead size
>                  32MB mem =>  32KB readahead size
> 
> CC: Matt Mackall <mpm@...enic.com>
> CC: Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>
> Signed-off-by: Wu Fengguang <fengguang.wu@...el.com>
> ---
>  mm/filemap.c   |    2 +-
>  mm/readahead.c |   22 ++++++++++++++++++++++
>  2 files changed, 23 insertions(+), 1 deletion(-)
> 
> --- linux.orig/mm/filemap.c	2010-02-26 10:04:28.000000000 +0800
> +++ linux/mm/filemap.c	2010-02-26 10:08:33.000000000 +0800
> @@ -1431,7 +1431,7 @@ static void do_sync_mmap_readahead(struc
>  	/*
>  	 * mmap read-around
>  	 */
> -	ra_pages = max_sane_readahead(ra->ra_pages);
> +	ra_pages = min(ra->ra_pages, roundup_pow_of_two(totalram_pages / 1024));
>  	if (ra_pages) {
>  		ra->start = max_t(long, 0, offset - ra_pages/2);
>  		ra->size = ra_pages;
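
[Editor's note: for reference, here is a minimal userspace sketch of
the arithmetic in the hunk above, assuming 4 KB pages.
roundup_pow_of_two() is reimplemented below as a stand-in for the
kernel helper, and totalram_pages is a local variable rather than the
kernel global; the patched kernel code then clamps the per-file
ra->ra_pages to this cap via min().]

#include <stdio.h>

#define PAGE_SIZE 4096UL	/* assumed page size */

/* Stand-in for the kernel's roundup_pow_of_two() (for v > 0). */
static unsigned long roundup_pow_of_two(unsigned long v)
{
	unsigned long p = 1;

	while (p < v)
		p <<= 1;
	return p;
}

int main(void)
{
	unsigned long mem_mb[] = { 32, 128, 512 };

	for (int i = 0; i < 3; i++) {
		/* Total memory in pages, as totalram_pages is in the kernel. */
		unsigned long totalram_pages =
			mem_mb[i] * 1024 * 1024 / PAGE_SIZE;
		/* The cap from the patch: total memory / 1024, in pages. */
		unsigned long cap_pages =
			roundup_pow_of_two(totalram_pages / 1024);

		printf("%4luMB mem => %4luKB read-around cap\n",
		       mem_mb[i], cap_pages * PAGE_SIZE / 1024);
	}
	return 0;
}

[This reproduces the table in the changelog: 32MB => 32KB,
128MB => 128KB, 512MB => 512KB.]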

-- 

Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, System z Linux Performance
