Message-ID: <50937A62.1060105@parallels.com>
Date:	Fri, 2 Nov 2012 11:46:42 +0400
From:	Glauber Costa <glommer@...allels.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
	<kamezawa.hiroyu@...fujitsu.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Tejun Heo <tj@...nel.org>, Michal Hocko <mhocko@...e.cz>,
	Christoph Lameter <cl@...ux.com>,
	Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Suleiman Souhlal <suleiman@...gle.com>
Subject: Re: [PATCH v6 23/29] memcg: destroy memcg caches

On 11/02/2012 04:05 AM, Andrew Morton wrote:
> On Thu,  1 Nov 2012 16:07:39 +0400
> Glauber Costa <glommer@...allels.com> wrote:
> 
>> This patch implements destruction of memcg caches. Right now,
>> only caches where our reference counter is the last remaining are
>> deleted. If there are any other reference counters around, we just
>> leave the caches lying around until they go away.
>>
>> When that happens, a destruction function is called from the cache
>> code. Caches are only destroyed in process context, so we queue them
>> up for later processing in the general case.
>>
>>
>> ...
>>
>> @@ -5950,6 +6012,7 @@ static int mem_cgroup_pre_destroy(struct cgroup *cont)
>>  {
>>  	struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
>>  
>> +	mem_cgroup_destroy_all_caches(memcg);
>>  	return mem_cgroup_force_empty(memcg, false);
>>  }
>>  
> 
> Conflicts with linux-next cgroup changes.  Looks pretty simple:
> 
> 
> static int mem_cgroup_pre_destroy(struct cgroup *cont)
> {
> 	struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
> 	int ret;
> 
> 	css_get(&memcg->css);
> 	ret = mem_cgroup_reparent_charges(memcg);
> 	mem_cgroup_destroy_all_caches(memcg);
> 	css_put(&memcg->css);
> 
> 	return ret;
> }
> 
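
For anyone following along: the changelog's "queue them up for later
processing" is the usual deferral pattern, since kmem_cache_destroy()
can sleep and therefore needs process context. Roughly like this (a
sketch only; the struct layout and function names below are
illustrative, not the ones from the patch):

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct memcg_cache_params {
	struct kmem_cache *cachep;
	struct work_struct destroy_work;	/* illustrative field */
};

static void memcg_cache_destroy_func(struct work_struct *w)
{
	struct memcg_cache_params *p =
		container_of(w, struct memcg_cache_params, destroy_work);

	/* Workqueue callbacks always run in process context. */
	kmem_cache_destroy(p->cachep);
}

static void memcg_schedule_cache_destroy(struct memcg_cache_params *p)
{
	/* The last reference may be dropped from a context that cannot
	 * sleep, so defer the actual destruction to a worker. */
	INIT_WORK(&p->destroy_work, memcg_cache_destroy_func);
	schedule_work(&p->destroy_work);
}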

There is one significant difference between the code I had and the code
after your fixup.

In my patch, caches were destroyed before the call to
mem_cgroup_force_empty(). In the final version, they are destroyed after it.

I have been thinking about it, but I am not sure whether this has any
significant impact... If we run mem_cgroup_destroy_all_caches() before
reparenting, we will have shrunk a lot of the pending caches and will
have fewer pages to reparent. But we only reparent pages on the LRU
anyway, and then expect kmem and the remaining umem to match. So *in
theory* it should be fine.
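
Spelled out against your fixup above, the ordering my patch had would
amount to just swapping the two calls (shown only to make the
difference concrete, not tested in this form):

static int mem_cgroup_pre_destroy(struct cgroup *cont)
{
	struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
	int ret;

	css_get(&memcg->css);
	/* Shrink the pending kmem caches first... */
	mem_cgroup_destroy_all_caches(memcg);
	/* ...so there is less left to reparent here. */
	ret = mem_cgroup_reparent_charges(memcg);
	css_put(&memcg->css);

	return ret;
}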

Where can I grab your final tree so I can test it and make sure it is
all good?

