Date:	Mon, 9 Jul 2012 17:32:27 +0200
From:	Michal Hocko <mhocko@...e.cz>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Hugh Dickins <hughd@...gle.com>,
	David Rientjes <rientjes@...gle.com>, linux-mm@...ck.org,
	cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 10/11] mm: memcg: only check swap cache pages for
 repeated charging

On Thu 05-07-12 02:45:02, Johannes Weiner wrote:
> Only anon and shmem pages in the swap cache may see repeated charge
> attempts, from every swap pte fault or from shmem_unuse().  No other
> pages require the PageCgroupUsed() check.
> 
> Charging pages in the swap cache is also serialized by the page lock,
> and since try_charge and commit_charge are both called under the same
> page lock section, the PageCgroupUsed() check might as well happen
> before the counter is charged, let alone before reclaim is entered.
> 
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>

Acked-by: Michal Hocko <mhocko@...e.cz>

> ---
>  mm/memcontrol.c |   17 ++++++++++++-----
>  1 files changed, 12 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index a8bf86a..d3701cd 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2471,11 +2471,7 @@ static void __mem_cgroup_commit_charge(struct mem_cgroup *memcg,
>  	bool anon;
>  
>  	lock_page_cgroup(pc);
> -	if (unlikely(PageCgroupUsed(pc))) {
> -		unlock_page_cgroup(pc);
> -		__mem_cgroup_cancel_charge(memcg, nr_pages);
> -		return;
> -	}
> +	VM_BUG_ON(PageCgroupUsed(pc));
>  	/*
>  	 * we don't need page_cgroup_lock for tail pages, because they are not
>  	 * accessed by any other context at this point.
> @@ -2740,8 +2736,19 @@ static int __mem_cgroup_try_charge_swapin(struct mm_struct *mm,
>  					  struct mem_cgroup **memcgp)
>  {
>  	struct mem_cgroup *memcg;
> +	struct page_cgroup *pc;
>  	int ret;
>  
> +	pc = lookup_page_cgroup(page);
> +	/*
> +	 * Every swap fault against a single page tries to charge the
> +	 * page, bail as early as possible.  shmem_unuse() encounters
> +	 * already charged pages, too.  The USED bit is protected by
> +	 * the page lock, which serializes swap cache removal, which
> +	 * in turn serializes uncharging.
> +	 */
> +	if (PageCgroupUsed(pc))
> +		return 0;
>  	if (!do_swap_account)
>  		goto charge_cur_mm;
>  	/*
> -- 
> 1.7.7.6
> 
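
For anyone skimming the memcg details: below is a minimal, standalone
userspace sketch of the ordering this patch relies on.  It is not the
kernel API; all names are made up.  The mutex stands in for the page
lock, and the "used" flag stands in for PageCgroupUsed(), so a repeated
charge attempt bails out before the counter (or reclaim) is ever
touched:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	pthread_mutex_t lock;	/* stands in for the page lock */
	bool used;		/* stands in for PageCgroupUsed() */
};

static int charges;		/* stands in for the memcg counter */

/* Called from every "swap fault" against the same page. */
static void charge_swapin(struct fake_page *page)
{
	pthread_mutex_lock(&page->lock);
	if (page->used) {
		/* Already charged: bail before touching the counter. */
		pthread_mutex_unlock(&page->lock);
		return;
	}
	charges++;		/* the expensive try_charge + commit step */
	page->used = true;
	pthread_mutex_unlock(&page->lock);
}

int main(void)
{
	struct fake_page page = { PTHREAD_MUTEX_INITIALIZER, false };

	charge_swapin(&page);	/* first fault charges */
	charge_swapin(&page);	/* repeated fault is a no-op */
	printf("charges = %d\n", charges);	/* prints 1 */
	return 0;
}

Because the check and the charge sit under one lock, the early return
is race-free, which is also what lets commit_charge replace the old
recovery path with a plain VM_BUG_ON().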

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic