Date:	Tue, 24 Jun 2014 23:26:41 +0400
From:	Kirill Tkhai <tkhai@...dex.ru>
To:	bsegall@...gle.com
CC:	Kirill Tkhai <ktkhai@...allels.com>, linux-kernel@...r.kernel.org,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>,
	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
	Mike Galbraith <umgwanakikbuti@...il.com>,
	khorenko@...allels.com, Paul Turner <pjt@...gle.com>
Subject: Re: [PATCH v2 1/3] sched/fair: Disable runtime_enabled on dying rq

On 24.06.2014 23:13, bsegall@...gle.com wrote:
> Kirill Tkhai <tkhai@...dex.ru> writes:
> 
>> On 24.06.2014 21:03, bsegall@...gle.com wrote:
>>> Kirill Tkhai <ktkhai@...allels.com> writes:
>>>
>>>> We kill rq->rd at the CPU_DOWN_PREPARE stage:
>>>>
>>>> 	cpuset_cpu_inactive -> cpuset_update_active_cpus -> partition_sched_domains ->
>>>> 	-> cpu_attach_domain -> rq_attach_root -> set_rq_offline
>>>>
>>>> This unthrottles all throttled cfs_rqs.
>>>>
>>>> But the cpu is still able to call schedule() till
>>>>
>>>> 	take_cpu_down->__cpu_disable()
>>>>
>>>> is called from stop_machine.
>>>>
>>>> In this case the tasks from the just-unthrottled cfs_rqs are
>>>> pickable in the standard scheduler way, and they are picked by the
>>>> dying cpu. The cfs_rqs become throttled again, and migrate_tasks()
>>>> in migration_call() skips their tasks (another unthrottle in
>>>> migrate_tasks()->CPU_DYING does not happen, because rq->rd
>>>> is already NULL).
>>>>
>>>> The patch sets runtime_enabled to zero. This guarantees that
>>>> runtime is not accounted, that the cfs_rqs won't exceed the given
>>>> cfs_rq->runtime_remaining = 1, and that their tasks will be
>>>> pickable in migrate_tasks(). runtime_enabled is recalculated
>>>> when the rq becomes online again.
>>>>
>>>> Ben Segall also noticed that we always enable runtime in
>>>> tg_set_cfs_bandwidth(). Actually, we should do that for online
>>>> cpus only. To fix that, we check whether a cpu is online while
>>>> its rq is locked. This guarantees we do not race with
>>>> set_rq_offline(), which also requires rq->lock.
>>>>
>>>> v2: Fix race with tg_set_cfs_bandwidth().
>>>>     Move cfs_rq->runtime_enabled=0 above unthrottle_cfs_rq().
>>>>
>>>> Signed-off-by: Kirill Tkhai <ktkhai@...allels.com>
>>>> CC: Konstantin Khorenko <khorenko@...allels.com>
>>>> CC: Ben Segall <bsegall@...gle.com>
>>>> CC: Paul Turner <pjt@...gle.com>
>>>> CC: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
>>>> CC: Mike Galbraith <umgwanakikbuti@...il.com>
>>>> CC: Peter Zijlstra <peterz@...radead.org>
>>>> CC: Ingo Molnar <mingo@...nel.org>
>>>> ---
>>>>  kernel/sched/core.c |   15 +++++++++++----
>>>>  kernel/sched/fair.c |   22 ++++++++++++++++++++++
>>>>  2 files changed, 33 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>>>> index 7f3063c..707a3c5 100644
>>>> --- a/kernel/sched/core.c
>>>> +++ b/kernel/sched/core.c
>>>> @@ -7842,11 +7842,18 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota)
>>>>  		struct rq *rq = cfs_rq->rq;
>>>>  
>>>>  		raw_spin_lock_irq(&rq->lock);
>>>> -		cfs_rq->runtime_enabled = runtime_enabled;
>>>> -		cfs_rq->runtime_remaining = 0;
>>>> +		/*
>>>> +		 * Do not enable runtime on offline runqueues. We specially
>>>> +		 * make it disabled in unthrottle_offline_cfs_rqs().
>>>> +		 */
>>>> +		if (cpu_online(i)) {
>>>> +			cfs_rq->runtime_enabled = runtime_enabled;
>>>> +			cfs_rq->runtime_remaining = 0;
>>>> +
>>>> +			if (cfs_rq->throttled)
>>>> +				unthrottle_cfs_rq(cfs_rq);
>>>> +		}
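The kernel/sched/fair.c half of the diffstat is not quoted in this reply. Going by the changelog above (set cfs_rq->runtime_remaining = 1 and clear runtime_enabled before unthrottling on the dying rq, then recompute runtime_enabled when the rq goes online again) and by the comment in the hunk pointing at unthrottle_offline_cfs_rqs(), that side would look roughly like the sketch below; this is reconstructed from the changelog, not copied from the merged patch:

static void unthrottle_offline_cfs_rqs(struct rq *rq)
{
	struct cfs_rq *cfs_rq;

	for_each_leaf_cfs_rq(rq, cfs_rq) {
		if (!cfs_rq->runtime_enabled)
			continue;

		/* clock_task is not advancing here, any valid quota will do */
		cfs_rq->runtime_remaining = 1;

		/*
		 * The offline rq can still schedule until take_cpu_down(),
		 * so make sure nothing gets throttled on it again.
		 */
		cfs_rq->runtime_enabled = 0;

		if (cfs_rq_throttled(cfs_rq))
			unthrottle_cfs_rq(cfs_rq);
	}
}

The online path would then recompute runtime_enabled from the task group's cfs_bandwidth quota when the rq is attached again.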
>>>
>>> We can just do for_each_online_cpu, yes? Also, we probably need
>>> get_online_cpus/put_online_cpus, and/or want cpu_active_mask
>>> instead, right?
>>>
>>
>> Yes, we can use for_each_online_cpu/for_each_active_cpu with
>> get_online_cpus() taken, but that adds one more lock dependency.
>> That looks worse to me.
> 
> I mean, you need get_online_cpus anyway - cpu_online is just a test
> against the same mask that for_each_online_cpu uses, and without taking
> the lock you can still race with offlining and reset runtime_enabled.
> 

Oh, I see. Thanks.
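
For reference, what Ben is suggesting (iterating only the online cpus with the hotplug lock held, so the online check cannot race with set_rq_offline() clearing runtime_enabled) would make the tg_set_cfs_bandwidth() loop look roughly like this; a sketch of the suggestion, not necessarily the final hunk:

	get_online_cpus();	/* hold off cpu hotplug while we walk the mask */
	for_each_online_cpu(i) {
		struct cfs_rq *cfs_rq = tg->cfs_rq[i];
		struct rq *rq = cfs_rq->rq;

		raw_spin_lock_irq(&rq->lock);
		cfs_rq->runtime_enabled = runtime_enabled;
		cfs_rq->runtime_remaining = 0;

		if (cfs_rq->throttled)
			unthrottle_cfs_rq(cfs_rq);
		raw_spin_unlock_irq(&rq->lock);
	}
	put_online_cpus();

With get_online_cpus() held, a cpu in the mask cannot go through set_rq_offline()/unthrottle_offline_cfs_rqs() concurrently, so runtime_enabled is never re-enabled on a dying rq.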