Message-ID: <20160614130707.GJ5981@e106622-lin>
Date:	Tue, 14 Jun 2016 14:07:07 +0100
From:	Juri Lelli <juri.lelli@....com>
To:	xlpang@...hat.com
Cc:	Peter Zijlstra <peterz@...radead.org>, mingo@...nel.org,
	tglx@...utronix.de, rostedt@...dmis.org,
	linux-kernel@...r.kernel.org, mathieu.desnoyers@...icios.com,
	jdesfossez@...icios.com, bristot@...hat.com,
	Ingo Molnar <mingo@...hat.com>
Subject: Re: [RFC][PATCH 2/8] sched/rtmutex/deadline: Fix a PI crash for
 deadline tasks

On 14/06/16 20:53, Xunlei Pang wrote:
> On 2016/06/14 at 18:21, Juri Lelli wrote:
> > Hi,
> >
> > On 07/06/16 21:56, Peter Zijlstra wrote:
> >> From: Xunlei Pang <xlpang@...hat.com>
> >>
> >> A crash happened while I was playing with deadline PI rtmutex.
> >>
> >>     BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
> >>     IP: [<ffffffff810eeb8f>] rt_mutex_get_top_task+0x1f/0x30
> >>     PGD 232a75067 PUD 230947067 PMD 0
> >>     Oops: 0000 [#1] SMP
> >>     CPU: 1 PID: 10994 Comm: a.out Not tainted
> >>
> >>     Call Trace:
> >>     [<ffffffff810b658c>] enqueue_task+0x2c/0x80
> >>     [<ffffffff810ba763>] activate_task+0x23/0x30
> >>     [<ffffffff810d0ab5>] pull_dl_task+0x1d5/0x260
> >>     [<ffffffff810d0be6>] pre_schedule_dl+0x16/0x20
> >>     [<ffffffff8164e783>] __schedule+0xd3/0x900
> >>     [<ffffffff8164efd9>] schedule+0x29/0x70
> >>     [<ffffffff8165035b>] __rt_mutex_slowlock+0x4b/0xc0
> >>     [<ffffffff81650501>] rt_mutex_slowlock+0xd1/0x190
> >>     [<ffffffff810eeb33>] rt_mutex_timed_lock+0x53/0x60
> >>     [<ffffffff810ecbfc>] futex_lock_pi.isra.18+0x28c/0x390
> >>     [<ffffffff810ed8b0>] do_futex+0x190/0x5b0
> >>     [<ffffffff810edd50>] SyS_futex+0x80/0x180
> >>
> > This seems to be caused by the race condition you detail below between
> > load balancing and PI code. I tried to reproduce the BUG on my box, but
> > it seems hard to trigger. Do you have a reproducer I can give a try?
> >
> >> This is because rt_mutex_enqueue_pi() and rt_mutex_dequeue_pi()
> >> are only protected by pi_lock when operating on pi waiters, while
> >> rt_mutex_get_top_task() will access them with the rq lock held but
> >> without holding pi_lock.
> >>
> >> In order to tackle it, we introduce a new "pi_top_task" pointer
> >> cached in task_struct, and add a new rt_mutex_update_top_task()
> >> to update its value; it can be called from rt_mutex_setprio(),
> >> which holds both the owner's pi_lock and the rq lock. Thus
> >> "pi_top_task" can be safely accessed by enqueue_task_dl() under
> >> the rq lock.
> >>
> >> [XXX this next section is unparsable]
> > Yes, a bit hard to understand. However, am I correct in assuming this
> > patch and the previous one should fix this problem? Or are there still
> > other races causing issues?
> 
> Yes, these two patches can fix the problem.
> 
> >
> >> One problem is that by the time rt_mutex_adjust_prio()->...->
> >> rt_mutex_setprio() runs, the rtmutex lock has been released and
> >> the owner marked off; the dereferenced "pi_top_task" can then be
> >> a running task (as it can be falsely woken up by others before
> >> rt_mutex_setprio() gets to update "pi_top_task"). We solve this
> >> by calling __rt_mutex_adjust_prio() directly in
> >> mark_wakeup_next_waiter(), which holds both pi_lock and the
> >> rtmutex lock, and removing rt_mutex_adjust_prio(). Since this
> >> moves the deboost point, in order to avoid current being
> >> preempted due to the earlier deboost before wake_up_q(), we also
> >> move preempt_disable() before unlocking the rtmutex.
> >>
> >> Cc: Steven Rostedt <rostedt@...dmis.org>
> >> Cc: Ingo Molnar <mingo@...hat.com>
> >> Cc: Juri Lelli <juri.lelli@....com>
> >> Originally-From: Peter Zijlstra <peterz@...radead.org>
> >> Signed-off-by: Xunlei Pang <xlpang@...hat.com>
> >> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> >> Link: http://lkml.kernel.org/r/1461659449-19497-2-git-send-email-xlpang@redhat.com
> > The idea of this fix makes sense to me. But I would like to be able to
> > see the BUG and test the fix. What I have is a test in which I create N
> > DEADLINE workers that share a PI mutex. They get migrated around and
> > seem to stress the PI code. But I couldn't hit the BUG yet. Maybe I'll
> > let it run for some more time.
> 
> You can use the attached reproducer (build with: gcc crash_deadline_pi.c -lpthread -lrt).
> Start multiple instances and it will hit the bug very soon.
> 

Great, thanks! I'll use it.

Best,

- Juri
