Message-ID: <b647ffbd0803111638sdc92903r7faad3d4b2b6b842@mail.gmail.com>
Date:	Wed, 12 Mar 2008 00:38:36 +0100
From:	"Dmitry Adamushko" <dmitry.adamushko@...il.com>
To:	"Hiroshi Shimamoto" <h-shimamoto@...jp.nec.com>
Cc:	"Peter Zijlstra" <peterz@...radead.org>,
	"Ingo Molnar" <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	linux-rt-users@...r.kernel.org, hpj@...la.net,
	stable <stable@...nel.org>
Subject: Re: [PATCH] sched: fix race in schedule

On 11/03/2008, Hiroshi Shimamoto <h-shimamoto@...jp.nec.com> wrote:
> Peter Zijlstra wrote:
>  > On Mon, 2008-03-10 at 19:12 -0700, Hiroshi Shimamoto wrote:
>  >> Peter Zijlstra wrote:
>  >>> On Mon, 2008-03-10 at 13:01 -0700, Hiroshi Shimamoto wrote:
>  >>>
>  >>>> thanks, your patch looks nice to me.
>  >>>> I had focused on the setprio, on_rq=0 and running=1 situation, which
>  >>>> led me to fix these functions.
>  >>>> But one point I've just noticed: I'm not sure about the same situation
>  >>>> with sched_rt. I think the pre_schedule() of rt has a chance to drop
>  >>>> the rq lock. Is it OK?
>  >>> Ah, you are quite right, that'll teach me to rush out a patch just
>  >>> because dinner is ready :-).
>  >>>
>  >>> How about we submit the following patch for mainline and CC -stable to
>  >>> fix .23 and .24:
>  >>>
>  >> Unfortunately, I encountered similar panic with this patch on -rt.
>  >> I'll look into this, again. I might have missed something...
>  >>
>  >> Unable to handle kernel NULL pointer dereference at 0000000000000128 RIP:
>  >>  [<ffffffff802297f5>] pick_next_task_fair+0x2d/0x42
>  >
>  > :-(
>  >
>  > OK, so that means I'm not getting it.
>  >
>  > So what does your patch do that mine doesn't?
>  >
>  > It removes the dependency of running (=task_current()) from on_rq
>  > (p->se.on_rq).
>  >
>  > So how can a current task not be on the runqueue?
>  >
>  > Only sched.c:dequeue_task() and sched_fair.c:account_entity_dequeue()
>  > set on_rq to 0, the only one changing rq->curr is schedule().
>  >
>  > So the only scheme I can come up with is that we did dequeue p (on_rq ==
>  > 0), but we didn't yet schedule so rq->curr == p.
>  >
>  > Is this how you ended up with your previous analysis that it must be due
>  > to a hole introduced by double_lock_balance()?
>  >
>  > Because now we can seemingly call deactivate_task() and put_prev_task()
>  > in non-atomic fashion, but by placing the put_prev_task() before the
>  > load balance calls we should avoid doing that.
>  >
>  > So what else is going on... /me puzzled
>
>
> thanks for taking the time.
>  I've tested it with a debug tracer; it took from several hours to half a
>  day to reproduce it on my box. And I got a possible scenario.
>
>  Before I begin, note that se->on_rq is changed only in enqueue_task() or
>  dequeue_task() in sched.c.
>
>  Here is the flow to panic which I got;
>
>    CPU0                               CPU1
>     |                              schedule()
>     |                              ->deactivate_task()
>
>     |                              -->dequeue_task()
>     |                                  * on_rq=0
>     |                              ->put_prev_task_fair()
>
>     |                              ->idle_balance()
>     |                              -->load_balance_newidle()
>
> (a wakeup function)                    |
>
>     |                              --->double_lock_balance()
>     *get lock                          *rel lock
>
>     * wake up target is CPU1's curr    |
>  ->enqueue_task()                       |
>     * on_rq=1                          |
>  ->rt_mutex_setprio()                   |
>     * on_rq=1, running=1               |
>  -->dequeue_task()!!                    |
>  -->put_prev_task_fair()!!              |

humm... this one should have caused the problem.

->put_prev_task() has already been done in schedule(), so we get two
consecutive ->put_prev_task() calls without set_curr_task()/pick_next_task()
being called in between
[ as a result, __enqueue_entity() is called twice for CPU1's curr and
that definitely corrupts the rb-tree ]

your initial patch doesn't have this problem. humm... logically speaking,
it looks like a change of the 'current' task, which can be expressed by
the pair:

(1) put_prev_task() + (2) pick_next_task() or set_curr_task()
(both end up calling set_next_entity())

has to be 'atomic' wrt the rq->lock.

For schedule() that also involves a change of rq->curr.
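The rule can be sketched like this (simplified pseudocode, not the actual
schedule() source; only the ordering and locking matter here):

```
/* simplified sketch, not actual kernel source */
spin_lock(&rq->lock);
put_prev_task(rq, prev);                /* (1) */
/*
 * rq->lock must not be dropped between (1) and (2): if it is
 * (e.g. via double_lock_balance() in the balance path), a remote
 * wakeup + rt_mutex_setprio() can run put_prev_task() on rq->curr
 * a second time, as in the trace above.
 */
next = pick_next_task(rq, prev);        /* (2), calls set_next_entity() */
rq->curr = next;
spin_unlock(&rq->lock);
```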

Your initial patch seems to satisfy this rule... but I'm still
wondering whether the current scheme is a bit 'hairy' :-/


>
> Hiroshi Shimamoto
>

-- 
Best regards,
Dmitry Adamushko
