Date:	Mon, 20 Feb 2012 20:21:14 +0400
From:	Konstantin Khlebnikov <khlebnikov@...nvz.org>
To:	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC:	Rik van Riel <riel@...hat.com>, Hugh Dickins <hughd@...gle.com>,
	Mel Gorman <mgorman@...e.de>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Johannes Weiner <jweiner@...hat.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: [PATCH 2/3] mm: replace per-cpu lru-add page-vectors with page-lists

Konstantin Khlebnikov wrote:
> This patch replaces page-vectors with page-lists in the lru_cache_add*() functions.
> We can use page->lru for linking, because the page is obviously not on an LRU list yet.
>
> Now the per-cpu batch is limited by the total size of its pages, not by their count;
> otherwise it can grow extremely large if it contains many huge pages:
> PAGEVEC_SIZE * HPAGE_SIZE = 28MB, per cpu!
> These pages are hidden from the memory reclaimer for a while.
> New limit: LRU_CACHE_ADD_BATCH = 64 (* PAGE_SIZE = 256KB)
>
> So adding a huge page now always drains the per-cpu list. Huge-page allocation
> and preparation is a long procedure, so nobody will notice this draining.
>
> The draining procedure disables preemption only while isolating the page list,
> so the batch size can be increased without a negative effect on latency.
>
> This patch also introduces a new function, lru_cache_add_list(), and uses it in
> mpage_readpages() and read_pages(), where the pages are already collected in a list.
> Unlike the single-page lru-add, the list-add reuses the page references from the caller,
> so we save one get_page()/put_page() pair per page.
>
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@...nvz.org>
> ---
>   fs/mpage.c           |   21 +++++++----
>   include/linux/swap.h |    2 +
>   mm/readahead.c       |   15 +++++---
>   mm/swap.c            |   99 +++++++++++++++++++++++++++++++++++++++++++++-----
>   4 files changed, 114 insertions(+), 23 deletions(-)
>

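For illustration, here is a rough sketch of the draining pattern described above. It is not the patch code: the per-cpu array name lru_add_pages is taken from the quoted hunk below, and add_pages_to_lru() is an invented placeholder for the actual LRU insertion. The point is that preemption is held only across the list splice, so the insertion itself stays preemptible and a larger batch does not add latency:

	static void drain_cpu_lru_add_list(enum lru_list lru)
	{
		LIST_HEAD(local);

		/* Only the splice of the per-cpu list runs with preemption off. */
		preempt_disable();
		list_splice_init(this_cpu_ptr(lru_add_pages) + lru, &local);
		preempt_enable();

		/* Fully preemptible from here: move the isolated pages onto the LRU. */
		add_pages_to_lru(&local, lru);	/* invented placeholder helper */
	}
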
>
>   	pvec = &per_cpu(lru_rotate_pvecs, cpu);
> @@ -765,6 +841,11 @@ EXPORT_SYMBOL(pagevec_lookup_tag);
>   void __init swap_setup(void)
>   {
>   	unsigned long megs = totalram_pages >> (20 - PAGE_SHIFT);
> +	int cpu, lru;
> +
> +	for_each_possible_cpu(cpu)
> +		for_each_lru(lru)
> +			INIT_LIST_HEAD(per_cpu(lru_add_pages, cpu) + lru);

As I feared, this point is already too late for this initialization;
it must be done in a core initcall instead.

I'll send v2 together with the updated lru-lock splitting, rebased onto linux-next.
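For reference, a minimal sketch of what the core-initcall initialization could look like (this is not the actual v2 change; the per-cpu name lru_add_pages comes from the quoted hunk above, and the initcall function name is invented here):

	static int __init lru_add_pages_init(void)
	{
		int cpu, lru;

		/* Initialize every per-cpu, per-lru list head before the first user runs. */
		for_each_possible_cpu(cpu)
			for_each_lru(lru)
				INIT_LIST_HEAD(per_cpu(lru_add_pages, cpu) + lru);
		return 0;
	}
	core_initcall(lru_add_pages_init);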

>
>   #ifdef CONFIG_SWAP
>   	bdi_init(swapper_space.backing_dev_info);
>

