Date:   Mon, 30 Jul 2018 11:31:13 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Vladimir Davydov <vdavydov.dev@...il.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...nel.org>,
        Kirill Tkhai <ktkhai@...tuozzo.com>, cgroups@...r.kernel.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] memcg: Remove memcg_cgroup::id from IDR on
 mem_cgroup_css_alloc() failure

On Sun, Jul 29, 2018 at 10:26:21PM +0300, Vladimir Davydov wrote:
> On Fri, Jul 27, 2018 at 03:31:34PM -0400, Johannes Weiner wrote:
> > That said, the lifetime of the root reference on the ID is the online
> > state, we put that in css_offline. Is there a reason we need to have
> > the ID ready and the memcg in the IDR before onlining it?
> 
> I fail to see any reason for this in the code.

Me neither, thanks for double checking.
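
For context, the offline-side counterpart is the ID put; sketched
roughly here from the current tree rather than quoted verbatim:

/*
 * Schematic only, not part of the patch below: dropping the root
 * reference removes the ID from the IDR and releases the css
 * reference that the ID was pinning.  Called from the offline path
 * (mem_cgroup_css_offline()).
 */
static void mem_cgroup_id_put(struct mem_cgroup *memcg)
{
	if (atomic_dec_and_test(&memcg->id.ref)) {
		idr_remove(&mem_cgroup_idr, memcg->id.id);
		memcg->id.id = 0;

		/* Memcg ID pins CSS */
		css_put(&memcg->css);
	}
}

So allocating the ID at online time keeps the two ends symmetric.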

The patch also survives stress testing cgroup creation and destruction
with the script from 73f576c04b94 ("mm: memcontrol: fix cgroup
creation failure after many small jobs").
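
For reference, anything that churns through cgroup create/destroy in a
tight loop exercises the same paths; here is an illustrative stand-in
in C, not the script from that commit, with the cgroup v1 mount point,
directory name and iteration count made up (needs root and the memory
controller mounted there):

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	char path[128];
	long i;

	/* One mkdir/rmdir pair allocates and releases one memcg ID. */
	snprintf(path, sizeof(path),
		 "/sys/fs/cgroup/memory/memcg-id-stress-%d", (int)getpid());

	for (i = 0; i < 100000; i++) {
		if (mkdir(path, 0755)) {
			perror("mkdir");
			return 1;
		}
		if (rmdir(path)) {
			perror("rmdir");
			return 1;
		}
	}
	return 0;
}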

> > Can we do something like this and not mess with the alloc/free
> > sequence at all?
> 
> I guess so, and this definitely looks better to me.

Cool, then I think we should merge Kirill's patch as the fix and mine
as a follow-up cleanup.

---

From b4106ea1f163479da805eceada60c942bd66e524 Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes@...xchg.org>
Date: Mon, 30 Jul 2018 11:03:55 -0400
Subject: [PATCH] mm: memcontrol: simplify memcg idr allocation and error
 unwinding

The memcg ID is allocated early in the multi-step memcg creation
process, which requires a two-step ID allocation and IDR publishing
scheme, as well as two separate IDR cleanup/unwind sites on error.

Defer the IDR allocation until the last second during onlining to
eliminate all this complexity. There is no requirement to have the ID
and IDR entry earlier than that. And the root reference to the ID is
put in the offline path, so this matches nicely.

Signed-off-by: Johannes Weiner <hannes@...xchg.org>
---
 mm/memcontrol.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 12205159b462..12339ae779ca 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4151,12 +4151,6 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
 	if (!memcg)
 		return NULL;
 
-	memcg->id.id = idr_alloc(&mem_cgroup_idr, NULL,
-				 1, MEM_CGROUP_ID_MAX,
-				 GFP_KERNEL);
-	if (memcg->id.id < 0)
-		goto fail;
-
 	memcg->stat_cpu = alloc_percpu(struct mem_cgroup_stat_cpu);
 	if (!memcg->stat_cpu)
 		goto fail;
@@ -4183,10 +4177,8 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
 #ifdef CONFIG_CGROUP_WRITEBACK
 	INIT_LIST_HEAD(&memcg->cgwb_list);
 #endif
-	idr_replace(&mem_cgroup_idr, memcg, memcg->id.id);
 	return memcg;
 fail:
-	mem_cgroup_id_remove(memcg);
 	__mem_cgroup_free(memcg);
 	return NULL;
 }
@@ -4245,7 +4237,6 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 
 	return &memcg->css;
 fail:
-	mem_cgroup_id_remove(memcg);
 	mem_cgroup_free(memcg);
 	return ERR_PTR(-ENOMEM);
 }
@@ -4253,10 +4244,17 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 {
 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+	int i;
+
+	i = idr_alloc(&mem_cgroup_idr, memcg, 1, MEM_CGROUP_ID_MAX, GFP_KERNEL);
+	if (i < 0)
+		return i;
 
 	/* Online state pins memcg ID, memcg ID pins CSS */
+	memcg->id.id = i;
 	atomic_set(&memcg->id.ref, 1);
 	css_get(css);
+
 	return 0;
 }
 
-- 
2.18.0
