Date:	Mon, 10 Mar 2008 13:01:41 -0700
From:	Hiroshi Shimamoto <h-shimamoto@...jp.nec.com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	linux-rt-users@...r.kernel.org, hpj@...la.net
Subject: Re: [PATCH] sched: fix race in schedule

Peter Zijlstra wrote:
> On Mon, 2008-03-10 at 11:01 -0700, Hiroshi Shimamoto wrote:
>> Hi Ingo,
>>
>> I found a race condition in scheduler.
>> The first report is the below;
>> http://lkml.org/lkml/2008/2/26/459
>>
>> It took some time to investigate, and I couldn't spend much time on it last
>> week. It is hard to reproduce, but a little easier on -rt because it has
>> preemptible spinlocks and RCU.
>>
>> Could you please review the scenario and the patch?
>> It will be needed for stable, too.
>>
>> ---
>> From: Hiroshi Shimamoto <h-shimamoto@...jp.nec.com>
>>
>> There is a race condition between schedule() and some dequeue/enqueue
>> functions: rt_mutex_setprio(), __setscheduler() and sched_move_task().
>>
>> When scheduling to idle, idle_balance() is called to pull tasks from
>> other busy processors, and it may drop the rq lock. This means those
>> three functions can encounter on_rq=0 and running=1; the current task
>> should be put while it is still running.
>>
>> Here is a possible scenario;
>>    CPU0                               CPU1
>>     |                              schedule()
>>     |                              ->deactivate_task()
>>     |                              ->idle_balance()
>>     |                              -->load_balance_newidle()
>> rt_mutex_setprio()                     |
>>     |                              --->double_lock_balance()
>>     *get lock                          *rel lock
>>     * on_rq=0, running=1               |
>>     * sched_class is changed           |
>>     *rel lock                          *get lock
>>     :                                  |
>>                                        :
>>                                    ->put_prev_task_rt()
>>                                    ->pick_next_task_fair()
>>                                        => panic
>>
>> The current process on CPU1 (P1) is scheduling. P1 is deactivated,
>> and the scheduler looks for another process on other CPUs' runqueues
>> because CPU1 will go idle. idle_balance(), load_balance_newidle()
>> and double_lock_balance() are called, and double_lock_balance() can
>> drop the rq lock. Meanwhile, CPU0 is boosting the priority of P1.
>> As a result of the boost, only P1's prio and sched_class are changed
>> to RT; the sched entities of P1 and P1's group are never put. This
>> leaves the cfs_rq invalid, because it has a curr but no leaf, yet
>> pick_next_task_fair() is still called, and the kernel panics.
> 
> Very nice catch, this had me puzzled for a while. I'm not quite sure I
> fully understand. Could you explain why the below isn't sufficient?

Thanks, your patch looks good to me.
I had focused on the setprio, on_rq=0 and running=1 situation, which led
me to fix those functions instead.
One point I've just noticed, though: I'm not sure about the same situation
for sched_rt. I think the rt class's pre_schedule() has a chance to drop
the rq lock. Is that OK?

> 
> ---
> diff --git a/kernel/sched.c b/kernel/sched.c
> index a0c79e9..ebd9fc5 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -4067,10 +4067,11 @@ need_resched_nonpreemptible:
>  		prev->sched_class->pre_schedule(rq, prev);
>  #endif
>  
> +	prev->sched_class->put_prev_task(rq, prev);
> +
>  	if (unlikely(!rq->nr_running))
>  		idle_balance(cpu, rq);
>  
> -	prev->sched_class->put_prev_task(rq, prev);
>  	next = pick_next_task(rq, prev);
>  
>  	sched_info_switch(prev, next);
> 
> 
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
