Date:	Fri, 10 Jan 2014 15:57:57 +0530
From:	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To:	Jan Kara <jack@...e.cz>, Andrew Morton <akpm@...ux-foundation.org>,
	Linus <torvalds@...ux-foundation.org>
CC:	Fengguang Wu <fengguang.wu@...el.com>,
	David Cohen <david.a.cohen@...ux.intel.com>,
	Al Viro <viro@...iv.linux.org.uk>,
	Damien Ramonda <damien.ramonda@...el.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH V4] mm readahead: Fix readahead fail for no local
 memory and limit readahead pages

On 01/10/2014 03:22 PM, Jan Kara wrote:
> On Fri 10-01-14 09:36:56, Jan Kara wrote:
>> On Fri 10-01-14 00:54:50, Raghavendra K T wrote:
>>> We limit the number of readahead pages to 4k.
>>>
>>> max_sane_readahead() returns zero on a CPU whose NUMA node has no
>>> local memory. Fix that by returning a sanitized number of pages, viz.
>>> the minimum of (requested pages, 4k pages, number of local free pages).
>>>
>>> Result:
>>> fadvise experiment with FADV_WILLNEED on an x240 NUMA machine
>>> (32GB * 4G RAM) with a 1GB test file (12 iterations) yielded:
>>>
>>> kernel       Avg        Stddev
>>> base         7.264      0.56%
>>> patched      7.285      1.14%
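
For concreteness, the computation described in the changelog boils down to
something like the sketch below. This is an illustration of the intent, not
the literal V4 diff: the halving of the free-page count follows the
pre-patch max_sane_readahead() code, MAX_REMOTE_READAHEAD is assumed to be
the patch's 4096-page (16MB) cap, and min3() is the kernel's three-way
minimum helper.

	unsigned long max_sane_readahead(unsigned long nr)
	{
		int nid = numa_node_id();
		/* Pages we could plausibly fault into on the local node. */
		unsigned long local_free_page =
			node_page_state(nid, NR_INACTIVE_FILE) +
			node_page_state(nid, NR_FREE_PAGES);

		/*
		 * On a memoryless node local_free_page is 0, so fall back
		 * to the plain 4k-page cap instead of returning zero.
		 */
		if (!local_free_page)
			return min(nr, MAX_REMOTE_READAHEAD);

		/* min(requested, 4k pages, local free pages) per the changelog. */
		return min3(nr, MAX_REMOTE_READAHEAD, local_free_page / 2);
	}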
>>    OK, looks good to me. You can add:
>> Reviewed-by: Jan Kara <jack@...e.cz>
>    Hum, while doing some other work I've realized there may still be a
> problem hiding with the 16 MB limitation. E.g. the dynamic linker does
> MADV_WILLNEED on shared libraries. If the library (or executable) is
> larger than 16 MB, that may cause performance problems, since access is
> random in nature and we don't really know which part of the file we
> need first.
>
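
The pattern in question is userspace mapping a library and then hinting the
kernel, roughly like this (an illustration of the madvise(2) usage, not
glibc's actual loader code):

	#include <sys/mman.h>
	#include <stddef.h>

	/* Map a shared object read-only and ask the kernel to read it ahead. */
	static void *map_library(int fd, size_t len)
	{
		void *base = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);

		if (base != MAP_FAILED)
			/*
			 * For file-backed mappings this reaches the kernel's
			 * readahead path, where max_sane_readahead() caps
			 * the request.
			 */
			madvise(base, len, MADV_WILLNEED);
		return base;
	}
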
> I'm not sure what others think about this, but I'm now more inclined to
> be a bit more careful and introduce the 16 MB limit only for the NUMA
> case, i.e. something like:

Your suggestion makes sense, and I do not have a strong preference either
way. Maybe we should wait for comments (if any) from Linus/Andrew, since
Linus suggested the 16MB idea.

>
> 	unsigned long local_free_page;
> 	int nid;
>
> 	nid = numa_node_id();
> 	if (node_present_pages(nid)) {
> 		/*
> 		 * We sanitize the readahead size depending on free memory in
> 		 * the local node.
> 		 */
> 		local_free_page = node_page_state(nid, NR_INACTIVE_FILE)
> 				  + node_page_state(nid, NR_FREE_PAGES);
> 		return min(nr, local_free_page / 2);
> 	}
> 	/*
> 	 * Readahead onto remote memory is better than no readahead when the
> 	 * local NUMA node has no memory. We limit the readahead to 4k
> 	 * pages, though, to avoid thrashing the page cache.
> 	 */
> 	return min(nr, MAX_REMOTE_READAHEAD);
>
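
For reference, the cap under discussion translates as follows (the macro
name comes from the patch; the exact definition shown is an assumption
consistent with the "4k pages" and 16 MB figures above):

	/* 4096 pages * 4KB/page = 16MB of readahead at most. */
	#define MAX_REMOTE_READAHEAD	4096UL

With Jan's variant, that 16MB cap applies only on the memoryless-node path,
while the local-node path stays bounded by half the node's free plus
inactive-file pages.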
