Date:	Fri, 20 Jul 2012 12:19:20 +0900
From:	Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	Tim Chen <tim.c.chen@...ux.intel.com>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mel@....ul.ie>, Minchan Kim <minchan@...nel.org>,
	Johannes Weiner <hannes@...xchg.org>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	"andi.kleen" <andi.kleen@...el.com>, linux-mm <linux-mm@...ck.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Cgroup: Fix memory accounting scalability in shrink_page_list

(2012/07/20 8:34), Tim Chen wrote:
> Hi,
>
> I noticed that in a multi-process parallel file-reading benchmark I ran on
> an 8-socket machine, throughput slowed down by a factor of 8 when I ran
> the benchmark within a cgroup container.  I traced the problem to the
> following code path (see below) when we are trying to reclaim memory
> from the file cache.  The res_counter_uncharge function is called on
> every page that is reclaimed, and this creates heavy lock contention (a
> sketch of the contended path follows the profile).  The patch below
> allows the reclaimed pages to be uncharged from the resource counter in
> batch and recovers the lost throughput.
>
> Tim
>
>       40.67%           usemem  [kernel.kallsyms]                   [k] _raw_spin_lock
>                        |
>                        --- _raw_spin_lock
>                           |
>                           |--92.61%-- res_counter_uncharge
>                           |          |
>                           |          |--100.00%-- __mem_cgroup_uncharge_common
>                           |          |          |
>                           |          |          |--100.00%-- mem_cgroup_uncharge_cache_page
>                           |          |          |          __remove_mapping
>                           |          |          |          shrink_page_list
>                           |          |          |          shrink_inactive_list
>                           |          |          |          shrink_mem_cgroup_zone
>                           |          |          |          shrink_zone
>                           |          |          |          do_try_to_free_pages
>                           |          |          |          try_to_free_pages
>                           |          |          |          __alloc_pages_nodemask
>                           |          |          |          alloc_pages_current
>
>
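[For reference, the contended path in the profile above is res_counter_uncharge(),
which takes the counter spinlock once per reclaimed page (and once more per
ancestor counter in the hierarchy).  Roughly, as a simplified sketch of
kernel/res_counter.c from that era, not the exact code:

	/* Simplified sketch of the uncharge path; details may differ. */
	void res_counter_uncharge(struct res_counter *counter, unsigned long val)
	{
		unsigned long flags;
		struct res_counter *c;

		local_irq_save(flags);
		for (c = counter; c != NULL; c = c->parent) {
			spin_lock(&c->lock);	/* the lock behind _raw_spin_lock above */
			res_counter_uncharge_locked(c, val);
			spin_unlock(&c->lock);
		}
		local_irq_restore(flags);
	}

With 8 sockets reclaiming pages that all belong to the same cgroup, every
reclaimed page bounces that lock's cache line across sockets, which is where
the ~40% of time in _raw_spin_lock goes.]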

Thank you very much !!

When I added batching, I didn't touch the page-reclaim path, because it delays
res_counter_uncharge() and makes more threads run into page reclaim.
But from the numbers above, batching seems required.

And because of the current per-zone, per-memcg LRU design, batching works
very well: all the LRU pages that shrink_page_list() scans are on the
same memcg.
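
[That single-memcg property is what the batching relies on:
mem_cgroup_uncharge_start()/end() defer the res_counter update into a per-task
batch holding one memcg pointer and a page count, and flush it once at the end.
A simplified sketch of the pair, assuming the mm/memcontrol.c layout of the
time (swap-counter handling omitted, field names may differ slightly):

	/* Simplified sketch of the batching pair -- not the exact kernel code. */
	void mem_cgroup_uncharge_start(void)
	{
		current->memcg_batch.do_batch++;
		if (current->memcg_batch.do_batch == 1) {	/* outermost section */
			current->memcg_batch.memcg = NULL;
			current->memcg_batch.nr_pages = 0;
		}
	}

	/*
	 * While do_batch is set, the per-page uncharge path only accumulates
	 * nr_pages for the one batched memcg instead of touching res_counter.
	 */
	void mem_cgroup_uncharge_end(void)
	{
		struct memcg_batch_info *batch = &current->memcg_batch;

		if (--batch->do_batch)		/* still inside a nested section */
			return;
		if (batch->memcg && batch->nr_pages)
			res_counter_uncharge(&batch->memcg->res,
					     batch->nr_pages * PAGE_SIZE);
		batch->memcg = NULL;
	}

Because every page in one shrink_page_list() pass belongs to the same memcg,
everything fits in that single batch slot, so the whole pass costs one
res_counter_uncharge() instead of one per page.]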

BTW, it would be better to show how much this improved things in the patch
description.


> ---
> Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 33dc256..aac5672 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -779,6 +779,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>
>   	cond_resched();
>
> +	mem_cgroup_uncharge_start();
>   	while (!list_empty(page_list)) {
>   		enum page_references references;
>   		struct address_space *mapping;
> @@ -1026,6 +1027,7 @@ keep_lumpy:
>
>   	list_splice(&ret_pages, page_list);
>   	count_vm_events(PGACTIVATE, pgactivate);
> +	mem_cgroup_uncharge_end();

I guess placing mem_cgroup_uncharge_end() just after the loop would look better.
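[I.e. something like this; purely an illustration of the suggested placement,
not the actual surrounding code:

	mem_cgroup_uncharge_start();
	while (!list_empty(page_list)) {
		/* ... per-page processing ... */
	}
	mem_cgroup_uncharge_end();	/* pairs visibly with the _start() above */

	/* ... remaining cleanup, list_splice(), count_vm_events() ... */]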

Anyway,
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>

But please do show how much this improved things in the patch description.

Thanks,
-Kame

