Message-ID: <20150305014521.GA3112@kernel>
Date:	Thu, 5 Mar 2015 09:45:21 +0800
From:	Wanpeng Li <wanpeng.li@...ux.intel.com>
To:	Juri Lelli <juri.lelli@....com>
Cc:	Wanpeng Li <wanpeng.li@...ux.intel.com>,
	Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v9] sched/deadline: support dl task migration during cpu
 hotplug

Hi Juri,
On Wed, Mar 04, 2015 at 09:53:19AM +0000, Juri Lelli wrote:
>Hi,
>
>I think we are still missing a corner case: no admission control, a task
>with an affinity mask of a single cpu, and that cpu goes offline. In this
>case we could try to let the task run somewhere else, as we don't
>guarantee anything from the start. The diff below applies on top of your
>patch; comments?
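>
>(To trigger this, something along these lines should do; schedtool's -a
>sets the affinity mask, and writing -1 to sched_rt_runtime_us disables
>admission control:)
>
># disable DL/RT admission control
>echo -1 > /proc/sys/kernel/sched_rt_runtime_us
># pin a deadline task to cpu 1, then take cpu 1 offline
>schedtool -a 0x2 -E -t 50000:100000 -e ./test
>echo 0 > /sys/devices/system/cpu/cpu1/online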
>
>Thanks,
>
>- Juri
>
>---
> kernel/sched/deadline.c | 20 +++++++++++++++++---
> 1 file changed, 17 insertions(+), 3 deletions(-)
>
>diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>index 467ec5d..8dec157 100644
>--- a/kernel/sched/deadline.c
>+++ b/kernel/sched/deadline.c
>@@ -579,10 +579,24 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
> 			 * online cpu.
> 			 */
> 			fallback = true;
>-			cpu = cpumask_any_and(cpu_active_mask, tsk_cpus_allowed(p));
>+			cpu = cpumask_any_and(cpu_active_mask,
>+					      tsk_cpus_allowed(p));

Indeed, otherwise "WARNING: line over 80 characters", I will change it back 
in next version.
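
For reference, this is what checkpatch reports for the unwrapped line
(the patch file name below is only illustrative):

$ ./scripts/checkpatch.pl v9-sched-deadline-dl-migration.patch
WARNING: line over 80 characters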

> 			if (cpu >= nr_cpu_ids) {
>-				WARN_ON(1);
>-				goto unlock;
>+				if (dl_bandwidth_enabled()) {
>+					/*
>+					 * Fail to find any suitable cpu.
>+					 * The task will never come back!
>+					 */
>+					WARN_ON(1);
>+					goto unlock;
>+				} else {
>+					/*
>+					 * If admission control is disabled we
>+					 * try a little harder to let the task
>+					 * run.
>+					 */
>+					cpu = cpumask_any(cpu_active_mask);
>+				}

Cool, I will fold this branch into my patch; your help is greatly
appreciated. ;-)
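
For reference, the folded fallback path should then look roughly like
this (just a sketch of the two hunks combined, not the final patch):

		if (!later_rq) {
			int cpu;

			/*
			 * If we cannot preempt any rq, fall back to pick
			 * any online cpu.
			 */
			fallback = true;
			cpu = cpumask_any_and(cpu_active_mask,
					      tsk_cpus_allowed(p));
			if (cpu >= nr_cpu_ids) {
				if (dl_bandwidth_enabled()) {
					/*
					 * Failed to find any suitable cpu.
					 * The task will never come back!
					 */
					WARN_ON(1);
					goto unlock;
				}
				/*
				 * If admission control is disabled we try
				 * a little harder to let the task run.
				 */
				cpu = cpumask_any(cpu_active_mask);
			}
			later_rq = cpu_rq(cpu);
			double_lock_balance(rq, later_rq);
		}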

Regards,
Wanpeng Li 

> 			}
> 			later_rq = cpu_rq(cpu);
> 			double_lock_balance(rq, later_rq);
>-- 
>2.3.0  
>
>On 02/03/2015 23:35, Wanpeng Li wrote:
>> I observed that a dl task can't be migrated to other cpus during cpu
>> hotplug; in addition, the task may or may not run again if the cpu is
>> added back. The root cause I found is that the dl task is throttled and
>> removed from the dl rq after consuming all of its budget, which means
>> the stop task can't pick it up from the dl rq and migrate it to other
>> cpus during hotplug.
>> 
>> To reproduce:
>> schedtool -E -t 50000:100000 -e ./test
>> ("test" is just a simple for loop.) Observe which cpu the test task is
>> on, then:
>> echo 0 > /sys/devices/system/cpu/cpuN/online
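>> 
>> For completeness, "test" can be as trivial as the following sketch (any
>> busy loop that keeps consuming runtime will do):
>> 
>> /* test.c: spin forever so the deadline task keeps consuming budget */
>> int main(void)
>> {
>> 	for (;;)
>> 		;
>> }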
>> 
>> This patch adds dl task migration during cpu hotplug: when the dl timer
>> fires and the current rq is offline, we find the most suitable
>> later-deadline rq for the task; if we fail to find one, we fall back to
>> any eligible online cpu so that the deadline task comes back to us, and
>> the push/pull mechanism should then move it around properly.
>> 
>> Signed-off-by: Wanpeng Li <wanpeng.li@...ux.intel.com>
>> ---
>> v8 -> v9:
>>  * align tsk_cpus_allowed(p) to cpu_active_mask
>>  * add WARN_ON(1)
>>  * don't resched_curr if later_rq comes from cpumask_any_and()
>> v7 -> v8:
>>  * remove the rd->span related modification, since Pang's commit
>>    16b269436b72 ("sched/deadline: Modify cpudl::free_cpus to reflect
>>    rd->online") was merged upstream, which, as Juri pointed out,
>>    handles exclusive cpusets
>>  * rebase
>> v6 -> v7:
>>  * rebase
>> v5 -> v6:
>>  * add double_lock_balance in the fallback path
>> v4 -> v5:
>>  * remove raw_spin_unlock(&rq->lock)
>>  * cleanup codes, spotted by Peterz
>>  * cleanup patch description
>> v3 -> v4:
>>  * use tsk_cpus_allowed wrapper
>>  * fix compile error
>> v2 -> v3:
>>  * don't get_task_struct
>>  * if we cannot preempt any rq, fall back to pick any online cpu
>>  * use cpu_active_mask as original later_mask if cpu is offline
>> v1 -> v2:
>>  * push the task to another cpu in dl_task_timer() if rq is offline.
>> 
>>  kernel/sched/deadline.c | 40 ++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 40 insertions(+)
>> 
>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>> index 08766a3..d5b1b16 100644
>> --- a/kernel/sched/deadline.c
>> +++ b/kernel/sched/deadline.c
>> @@ -492,6 +492,7 @@ static int start_dl_timer(struct sched_dl_entity *dl_se, bool boosted)
>>  	return hrtimer_active(&dl_se->dl_timer);
>>  }
>>  
>> +static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq);
>>  /*
>>   * This is the bandwidth enforcement timer callback. If here, we know
>>   * a task is not on its dl_rq, since the fact that the timer was running
>> @@ -537,6 +538,45 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
>>  	update_rq_clock(rq);
>>  
>>  	/*
>> +	 * So if we find that the rq the task was on is no longer
>> +	 * available, we need to select a new rq.
>> +	 */
>> +	if (unlikely(!rq->online)) {
>> +		struct rq *later_rq = NULL;
>> +		bool fallback = false;
>> +
>> +		later_rq = find_lock_later_rq(p, rq);
>> +
>> +		if (!later_rq) {
>> +			int cpu;
>> +
>> +			/*
>> +			 * If cannot preempt any rq, fallback to pick any
>> +			 * online cpu.
>> +			 */
>> +			fallback = true;
>> +			cpu = cpumask_any_and(cpu_active_mask, tsk_cpus_allowed(p));
>> +			if (cpu >= nr_cpu_ids) {
>> +				WARN_ON(1);
>> +				goto unlock;
>> +			}
>> +			later_rq = cpu_rq(cpu);
>> +			double_lock_balance(rq, later_rq);
>> +		}
>> +
>> +		deactivate_task(rq, p, 0);
>> +		set_task_cpu(p, later_rq->cpu);
>> +		activate_task(later_rq, p, ENQUEUE_REPLENISH);
>> +
>> +		if (!fallback)
>> +			resched_curr(later_rq);
>> +
>> +		double_unlock_balance(rq, later_rq);
>> +
>> +		goto unlock;
>> +	}
>> +
>> +	/*
>>  	 * If the throttle happened during sched-out; like:
>>  	 *
>>  	 *   schedule()
>> 
