Message-ID: <1271239788.32749.15.camel@laptop>
Date: Wed, 14 Apr 2010 12:09:48 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Raistlin <raistlin@...ux.it>
Cc: Ingo Molnar <mingo@...e.hu>, Thomas Gleixner <tglx@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
Chris Friesen <cfriesen@...tel.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Darren Hart <darren@...art.com>,
Henrik Austad <henrik@...tad.us>,
Johan Eker <johan.eker@...csson.com>,
"p.faure" <p.faure@...tech.ch>,
linux-kernel <linux-kernel@...r.kernel.org>,
Claudio Scordino <claudio@...dence.eu.com>,
michael trimarchi <trimarchi@...is.sssup.it>,
Fabio Checconi <fabio@...dalf.sssup.it>,
Tommaso Cucinotta <t.cucinotta@...up.it>,
Juri Lelli <juri.lelli@...il.com>,
Nicola Manica <nicola.manica@...il.com>,
Luca Abeni <luca.abeni@...tn.it>
Subject: Re: [RFC][PATCH 10/11] sched: add bandwidth management for
sched_dl.
On Sun, 2010-02-28 at 20:27 +0100, Raistlin wrote:
> @@ -2063,6 +2210,30 @@ task_hot(struct task_struct *p, u64 now, struct sched_domain *sd)
> return delta < (s64)sysctl_sched_migration_cost;
> }
>
> +/*
> + * When dealing with a -deadline task, we have to check if moving it to
> + * a new CPU is possible or not. In fact, this is only true iff there
> + * is enough bandwidth available on that CPU; otherwise we want the
> + * whole migration procedure to fail.
> + */
> +static inline
> +bool __set_task_cpu_dl(struct task_struct *p, unsigned int cpu)
> +{
> + struct dl_bandwidth *dl_b = task_dl_bandwidth(p);
> +
> + raw_spin_lock(&dl_b->dl_runtime_lock);
> + if (dl_b->dl_bw < dl_b->dl_total_bw[cpu] + p->dl.dl_bw) {
> + raw_spin_unlock(&dl_b->dl_runtime_lock);
> +
> + return 0;
> + }
> + dl_b->dl_total_bw[task_cpu(p)] -= p->dl.dl_bw;
> + dl_b->dl_total_bw[cpu] += p->dl.dl_bw;
> + raw_spin_unlock(&dl_b->dl_runtime_lock);
> +
> + return 1;
> +}
> +
> void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
> {
> #ifdef CONFIG_SCHED_DEBUG
> @@ -2077,6 +2248,9 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
> trace_sched_migrate_task(p, new_cpu);
>
> if (task_cpu(p) != new_cpu) {
> + if (task_has_dl_policy(p) && !__set_task_cpu_dl(p, new_cpu))
> + return;
> +
> p->se.nr_migrations++;
> perf_sw_event(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 1, NULL, 0);
> }
Yikes!! I'm not sure we can sanely deal with set_task_cpu() doing that.
I'd much rather see us never attempt set_task_cpu() when we know it's
not going to be possible.
That also means that things like set_cpus_allowed_ptr() /
sys_sched_setaffinity() will need to propagate the error back to their
users, which in turn will need to be able to cope.