Date:	Thu, 01 Mar 2012 11:20:47 +0800
From:	Li Zefan <lizf@...fujitsu.com>
To:	Frederic Weisbecker <fweisbec@...il.com>
CC:	Mandeep Singh Baines <msb@...omium.org>, Tejun Heo <tj@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Oleg Nesterov <oleg@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC][PATCH v2] cgroups: Run subsystem fork callback from cgroup_post_fork()

On 2012/03/01 00:21, Frederic Weisbecker wrote:
> On Wed, Feb 29, 2012 at 07:55:00AM -0800, Mandeep Singh Baines wrote:
>> Frederic Weisbecker (fweisbec@...il.com) wrote:
>>> When a user freezes a cgroup, the freezer sets the subsystem state
>>> to CGROUP_FREEZING and then iterates over the tasks in the cgroup links.
>>>
>>> But there is a possible race here, although unlikely, if a task
>>> forks and the parent is preempted between write_unlock(tasklist_lock)
>>> and cgroup_post_fork(). If we freeze the cgroup while the parent
>>
>> So what if you moved cgroup_post_fork() a few lines up so that it is
>> inside the tasklist_lock?
> 
> It won't work. Consider this scenario:
> 
> CPU 0                                     CPU 1
> 
>                                        cgroup_fork_callbacks()
>                                        write_lock(tasklist_lock)
> try_to_freeze_cgroup() {               add child to task list etc...
> 	cgroup_iter_start()
>         freeze tasks                        
>         cgroup_iter_end()
> }                                      cgroup_post_fork()
>                                        write_unlock(tasklist_lock)
> 
> If this is not the first time we call cgroup_iter_start(), we won't go
> through the whole tasklist; we simply iterate through the css set task links.
> 
> Plus we try to avoid anything under tasklist_lock when possible.
> 
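
For reference, cgroup_iter_start() only scans the whole tasklist the very
first time it is ever used, in order to link each task into its css set;
subsequent calls just walk those links. Roughly (paraphrased from
kernel/cgroup.c of that era, so details may differ):

void cgroup_iter_start(struct cgroup *cgrp, struct cgroup_iter *it)
{
	/*
	 * The first iteration ever links all existing tasks into
	 * their css sets; later calls only walk those links.
	 */
	if (!use_task_css_set_links)
		cgroup_enable_task_cg_lists();

	read_lock(&css_set_lock);
	it->cg_link = &cgrp->css_sets;
	cgroup_advance_iter(cgrp, it);
}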

Your patch won't close the race, I'm afraid:

CPU 0                                     CPU 1

// state will be set to FREEZING
echo FROZEN > /cgroup/sub/freezer.state
                                          write_lock(tasklist_lock)
                                          add child to task list ...
                                          write_unlock(tasklist_lock)
// state will be updated to FROZEN
cat /cgroup/sub/freezer.state
                                          cgroup_post_fork()
                                            ->freezer_fork()

freezer_fork() will freeze the task only if the cgroup is in FREEZING
state, and will BUG if the state is FROZEN.
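
For reference, the relevant part of freezer_fork() looks roughly like this
(paraphrased from kernel/cgroup_freezer.c of that era, so details may differ):

static void freezer_fork(struct cgroup_subsys *ss, struct task_struct *task)
{
	struct freezer *freezer;

	/* The child isn't on the tasklist yet, so it can't move cgroups. */
	rcu_read_lock();
	freezer = task_freezer(task);
	rcu_read_unlock();

	/* The root cgroup is non-freezable, nothing to check there. */
	if (!freezer->css.cgroup->parent)
		return;

	spin_lock_irq(&freezer->lock);
	/* Nobody is supposed to be able to fork into a FROZEN cgroup. */
	BUG_ON(freezer->state == CGROUP_FROZEN);

	/* Only a FREEZING cgroup makes us freeze the new child. */
	if (freezer->state == CGROUP_FREEZING)
		freeze_task(task);
	spin_unlock_irq(&freezer->lock);
}

In the scenario above it runs only after the state has already been updated
to FROZEN, so it hits the BUG_ON().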

We can fix freezer_fork(), but it seems that requires holding cgroup_mutex
in that function, which we don't like at all. Not to mention your
task_counter stuff...

At the moment I don't see a solution that doesn't involve tasklist_lock.
Any better ideas?

(I just realized the patch below introduces a tasklist_lock <-> freezer->lock
ABBA deadlock, so it's a bad idea to mess with tasklist_lock like this.)
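
If I read the paths right, the ordering would be roughly:

  freezer_change_state()                  copy_process() (with the patch)
    spin_lock_irq(&freezer->lock)           write_lock_irq(&tasklist_lock)
    try_to_freeze_cgroup()                  cgroup_fork_callbacks()
      read_lock(&tasklist_lock)               ->freezer_fork()
                                                spin_lock_irq(&freezer->lock)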

diff --git a/kernel/cgroup_freezer.c b/kernel/cgroup_freezer.c
index fc0646b..74527ac 100644
--- a/kernel/cgroup_freezer.c
+++ b/kernel/cgroup_freezer.c
@@ -278,6 +278,12 @@ static int try_to_freeze_cgroup(struct cgroup *cgroup, struct freezer *freezer)
 	struct task_struct *task;
 	unsigned int num_cant_freeze_now = 0;
 
+	/*
+	 * With this lock held and the check in freezer_fork(), a
+	 * half-forked task has no chance to escape from freezing.
+	 */
+	read_lock(&tasklist_lock);
+
 	cgroup_iter_start(cgroup, &it);
 	while ((task = cgroup_iter_next(cgroup, &it))) {
 		if (!freeze_task(task))
@@ -289,6 +295,8 @@ static int try_to_freeze_cgroup(struct cgroup *cgroup, struct freezer *freezer)
 	}
 	cgroup_iter_end(cgroup, &it);
 
+	read_unlock(&tasklist_lock);
+
 	return num_cant_freeze_now ? -EBUSY : 0;
 }
 
diff --git a/kernel/fork.c b/kernel/fork.c
index e2cd3e2..2450720 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1328,15 +1328,15 @@ static struct task_struct *copy_process(unsigned long clone_flags,
 	p->group_leader = p;
 	INIT_LIST_HEAD(&p->thread_group);
 
+	/* Need tasklist lock for parent etc handling! */
+	write_lock_irq(&tasklist_lock);
+
 	/* Now that the task is set up, run cgroup callbacks if
 	 * necessary. We need to run them before the task is visible
 	 * on the tasklist. */
 	cgroup_fork_callbacks(p);
 	cgroup_callbacks_done = 1;
 
-	/* Need tasklist lock for parent etc handling! */
-	write_lock_irq(&tasklist_lock);
-
 	/* CLONE_PARENT re-uses the old parent */
 	if (clone_flags & (CLONE_PARENT|CLONE_THREAD)) {
 		p->real_parent = current->real_parent;
@@ -1393,9 +1393,9 @@ static struct task_struct *copy_process(unsigned long clone_flags,
 
 	total_forks++;
 	spin_unlock(&current->sighand->siglock);
+	cgroup_post_fork(p);
 	write_unlock_irq(&tasklist_lock);
 	proc_fork_connector(p);
-	cgroup_post_fork(p);
 	if (clone_flags & CLONE_THREAD)
 		threadgroup_change_end(current);
 	perf_event_fork(p);

