Message-ID: <BLU436-SMTP2070F91D37071E830F9EA20806E0@phx.gbl>
Date: Fri, 28 Aug 2015 14:43:28 +0800
From: Wanpeng Li <wanpeng.li@...mail.com>
To: Boqun Feng <boqun.feng@...il.com>
CC: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Sasha Levin <sasha.levin@...cle.com>,
kernel test robot <ying.huang@...el.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched: fix tsk->pi_lock isn't held when
do_set_cpus_allowed()
Hi Boqun,
On 8/28/15 2:33 PM, Boqun Feng wrote:
> Hi Wanpeng,
>
> On Fri, Aug 28, 2015 at 12:02:47PM +0800, Wanpeng Li wrote:
> <snip>
>> This patch fixes it by following the rules for changing task_struct::cpus_allowed:
>> hold both pi_lock and rq->lock while the mask is changed.
>>
>> Reported-by: kernel test robot <ying.huang@...el.com>
>> Reported-by: Sasha Levin <sasha.levin@...cle.com>
>> Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
>> ---
>> v1 -> v2:
>> * fix the silly double lock stuff
>> * follow the rules for changing task_struct::cpus_allowed
>>
>> kernel/sched/core.c | 22 ++++++++++++++++++++++
>> 1 files changed, 22 insertions(+), 0 deletions(-)
>>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index b3386c6..8cf87e3 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -5186,6 +5186,27 @@ static void migrate_tasks(struct rq *dead_rq)
>> BUG_ON(!next);
>> next->sched_class->put_prev_task(rq, next);
>>
>> + /*
>> + * The rules for changing task_struct::cpus_allowed require holding
>> + * both pi_lock and rq->lock, such that holding either
>> + * stabilizes the mask.
>> + *
>> + * Dropping rq->lock is not quite as disastrous as it usually is
>> + * because the CPU is !cpu_active at this point, which means
>> + * load-balancing will not interfere.
>> + */
>> + lockdep_unpin_lock(&rq->lock);
>> + raw_spin_unlock(&rq->lock);
>> + raw_spin_lock(&next->pi_lock);
>> + raw_spin_lock(&rq->lock);
>> + lockdep_pin_lock(&rq->lock);
>> + if (!(task_rq(next) == rq && task_on_rq_queued(next))) {
>> + lockdep_unpin_lock(&rq->lock);
>> + raw_spin_unlock(&rq->lock);
> Dropping rq->lock here means we will continue the loop without
> rq->lock held, right? But we do have a lockdep_pin_lock(&rq->lock) at the
> beginning of every iteration. So is it really safe to release rq->lock here?
Good catch! I think there is no need to lockdep_unpin and unlock rq->lock on
that path; rq->lock should stay held when we continue the loop.
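Something along these lines, i.e. keep rq->lock held across the retry and only
drop next->pi_lock; the re-pin would also need to move after the check so the
pin count stays balanced with the lockdep_pin_lock() at the top of the loop
(untested sketch, just to illustrate the idea, not a real v3):

	lockdep_unpin_lock(&rq->lock);
	raw_spin_unlock(&rq->lock);
	raw_spin_lock(&next->pi_lock);
	raw_spin_lock(&rq->lock);

	/*
	 * If the task changed rq or was dequeued while rq->lock was
	 * dropped, retry with rq->lock still held: the next iteration
	 * then starts with rq->lock held, as the loop expects, and its
	 * lockdep_pin_lock() pins a lock that is actually held.
	 */
	if (!(task_rq(next) == rq && task_on_rq_queued(next))) {
		raw_spin_unlock(&next->pi_lock);
		continue;
	}
	lockdep_pin_lock(&rq->lock);
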
Regards,
Wanpeng Li
>
> Regards,
> Boqun
>
>> + raw_spin_unlock(&next->pi_lock);
>> + continue;
>> + }
>> +
>> /* Find suitable destination for @next, with force if needed. */
>> dest_cpu = select_fallback_rq(dead_rq->cpu, next);
>>
>> @@ -5196,6 +5217,7 @@ static void migrate_tasks(struct rq *dead_rq)
>> rq = dead_rq;
>> raw_spin_lock(&rq->lock);
>> }
>> + raw_spin_unlock(&next->pi_lock);
>> }
>>
>> rq->stop = stop;
>> --
>> 1.7.1
>>