Message-ID: <54505D10.7050809@yandex.ru>
Date:	Wed, 29 Oct 2014 06:20:48 +0300
From:	Kirill Tkhai <tkhai@...dex.ru>
To:	Oleg Nesterov <oleg@...hat.com>,
	Kirill Tkhai <ktkhai@...allels.com>
CC:	linux-kernel@...r.kernel.org,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>,
	Burke Libbey <burke.libbey@...pify.com>,
	Vladimir Davydov <vdavydov@...allels.com>
Subject: Re: [PATCH] sched: Fix race between task_group and sched_task_group

On 29.10.2014 01:52, Oleg Nesterov wrote:
> On 10/28, Kirill Tkhai wrote:
>>
>> Shouldn't we do that in separate patch? How about this?
> 
> Up to Peter, but I think a separate patch is fine.
> 
>> [PATCH]sched: Remove lockdep check in sched_move_task()
>>
>> sched_move_task() is the only interface to change sched_task_group:
>> cpu_cgrp_subsys methods and autogroup_move_group() use it.
> 
> Yes, but...
> 
>> Everything is synchronized by task_rq_lock(), so cpu_cgroup_attach()
>> is ordered with other users of sched_move_task(). This means we do
>> not need RCU here: if we've dereferenced a tg here, the .attach
>> method hasn't been called for it yet.
>>
>> Thus, we should pass "true" to task_css_check() to silence lockdep
>> warnings.
> 
> In theory, I am not sure.
> 
> However, I never really understood this code and today I forgot everything,
> please correct me.
> 
>> @@ -7403,8 +7403,12 @@ void sched_move_task(struct task_struct *tsk)
>>  	if (unlikely(running))
>>  		put_prev_task(rq, tsk);
>>
>> -	tg = container_of(task_css_check(tsk, cpu_cgrp_id,
>> -				lockdep_is_held(&tsk->sighand->siglock)),
>> +	/*
>> +	 * All callers are synchronized by task_rq_lock(), so RCU is not
>> +	 * needed here. Thus, we pass "true" to task_css_check() to
>> +	 * silence lockdep warnings.
>> +	 */
>> +	tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
>>  			  struct task_group, css);
> 
> Why can't this race with cgroup_task_migrate() if sched_move_task() is
> called from cgroup_post_fork()?

It can race, but what problem does that cause? The only consequence is
that cgroup_post_fork()'s or ss->attach()'s call of sched_move_task()
will be a NOOP.

cgroup_migrate_add_src()

  cgroup_task_migrate()
                                                    cgroup_post_fork();
    rcu_assign_pointer(tsk->cgroups, new_cset);
                                                      sched_move_task();
  css->ss->attach(css, &tset);

    sched_move_task();

cgroup_migrate_finish()
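
To make it concrete, below is roughly the relevant part of
sched_move_task() as I read it (a simplified sketch: the dequeue/enqueue,
autogroup and task_move_group handling are dropped). Both racing callers
run it under task_rq_lock(), so whichever call runs last simply re-reads
the css and overwrites ->sched_task_group with the final value; an
earlier call that saw the old css is harmless.

	struct task_group *tg;
	unsigned long flags;
	struct rq *rq;

	rq = task_rq_lock(tsk, &flags);

	/*
	 * Re-read the task's cpu cgroup css and (re)install the group.
	 * Fully serialized by task_rq_lock(), so the last writer wins.
	 */
	tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
			  struct task_group, css);
	tsk->sched_task_group = tg;
	set_task_rq(tsk, task_cpu(tsk));

	task_rq_unlock(rq, tsk, &flags);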

> And cgroup_task_migrate() can free ->cgroups via call_rcu(). Of course,
> in practice raw_spin_lock_irq() should also act as rcu_read_lock(), but
> we should not rely on implementation details.

Do you mean cgroup_task_migrate()->put_css_set_locked()? The freeing is
not possible there, because old_cset->refcount is larger than 1. We take
an extra reference in cgroup_migrate_add_src(), and the real freeing
happens in cgroup_migrate_finish(). These functions surround
cgroup_task_migrate() like a pair of brackets.
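
Schematically the refcounting looks like this (a simplified sketch; the
exact arguments of the helpers are omitted and written from memory):

cgroup_migrate_add_src()
  get_css_set(src_cset)              /* extra reference on the old cset */

  cgroup_task_migrate()
    rcu_assign_pointer(tsk->cgroups, new_cset)
    put_css_set_locked(old_cset)     /* drops only the task's reference */

  css->ss->attach(css, &tset)
    sched_move_task()                /* installs the right task_group */

cgroup_migrate_finish()
  put_css_set_locked(src_cset)       /* the add_src reference; only here
                                        can the old cset really go away */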

> task_group = tsk->cgroups[cpu_cgrp_id] can't go away because yes, if we
> race with migrate then ->attach() was not called. But it seems that in
> theory it is not safe to dereference tsk->cgroups.

old_cset can't be freed in cgroup_task_migrate(), so we can safely
dereference it. If cgroup_post_fork()->sched_move_task() sees the
old_cset, the right sched_task_group will be installed later by
->attach()->sched_move_task().

Kirill
