Date:	Thu, 05 Jan 2012 18:55:32 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Chanho Min <chanho0207@...il.com>
Cc:	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
	chanho.min@....com, rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH] sched_rt: the task in irq context can be migrated
 during context switching

On Thu, 2012-01-05 at 20:00 +0900, Chanho Min wrote:
> This issue happens under the following conditions:
> 1. preemption is off
> 2. __ARCH_WANT_INTERRUPTS_ON_CTXSW is defined.
> 3. RT scheduling class
> 4. SMP system
> 
> Sequence is as follows:
> 1. Suppose the current task is A; schedule() starts.
> 2. Task A is enqueued as a pushable task at the entry of schedule():
>    __schedule
>     prev = rq->curr;
>     ...
>     put_prev_task
>      put_prev_task_rt
>       enqueue_pushable_task
> 3. Task B is picked as the next task:
>    next = pick_next_task(rq);
> 4. rq->curr is set to task B and the context switch starts:
>    rq->curr = next;
> 5. At the entry of context_switch, this CPU's rq->lock is released:
>    context_switch
>     prepare_task_switch
>      prepare_lock_switch
>       raw_spin_unlock_irq(&rq->lock);
> 6. Shortly after rq->lock is released, an interrupt occurs and IRQ
> context is entered.
> 7. try_to_wake_up(), called from the ISR, acquires rq->lock:
>     try_to_wake_up
>      ttwu_remote
>       rq = __task_rq_lock(p)
>       ttwu_do_wakeup(rq, p, wake_flags);
>         task_woken_rt
> 8. push_rt_task() picks task A, which was enqueued earlier:
>    task_woken_rt
>     push_rt_tasks(rq)
>      next_task = pick_next_pushable_task(rq)
> 9. In find_lock_lowest_rq(), if double_lock_balance() returns 0,
> lowest_rq can be a remote rq.
>   (But if preemption is on, double_lock_balance() always returns 1
> and this doesn't happen.)
>    push_rt_task
>     find_lock_lowest_rq
>      if (double_lock_balance(rq, lowest_rq))..
> 10. find_lock_lowest_rq() returns an available rq; task A is migrated
> to the remote cpu/rq:
>    push_rt_task
>     ...
>     deactivate_task(rq, next_task, 0);
>     set_task_cpu(next_task, lowest_rq->cpu);
>     activate_task(lowest_rq, next_task, 0);
> 11. But task A is still in IRQ context on this CPU, so task A is
> scheduled by two CPUs at the same time until it returns from the IRQ.
> Task A's stack is corrupted and unexpected problems occur.
> 
> For recent ARM, I saw that a patch to remove
> __ARCH_WANT_INTERRUPTS_ON_CTXSW has been posted. But if that feature
> is adopted by other architectures, or remains in released kernels,
> this can still occur. Here is my patch to fix it; any opinions will
> be appreciated.

So the problem is quite real; as already said, we don't need to worry
about the future, but we might want to fix this in previous kernels.
What I'm not entirely sure of is the proposed solution. Steven, don't
we get into trouble by simply bailing out on the push?

> Signed-off-by: Chanho Min <chanho.min@....com>
> ---
>  kernel/sched_rt.c |    5 +++++
>  1 files changed, 5 insertions(+), 0 deletions(-)
> 
> diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
> index 583a136..59e66e3 100644
> --- a/kernel/sched_rt.c
> +++ b/kernel/sched_rt.c
> @@ -1388,6 +1388,11 @@ static int push_rt_task(struct rq *rq)
>         if (!next_task)
>                 return 0;
> 
> +#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
> +       if (unlikely(task_running(rq, next_task)))
> +               return 0;
> +#endif
> +
>  retry:
>         if (unlikely(next_task == rq->curr)) {
>                 WARN_ON(1);
> --
