Message-ID: <CALFYKtCVUcXisc5Yta-dfHMzk7C1zTj_JJSaOCFa60NeP=2tXg@mail.gmail.com>
Date: Thu, 31 Jul 2014 17:25:21 +0400
From: Ilya Dryomov <ilya.dryomov@...tank.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Ceph Development <ceph-devel@...r.kernel.org>,
davidlohr@...com, jason.low2@...com
Subject: Re: [PATCH] locking/mutexes: Revert "locking/mutexes: Add extra
reschedule point"
On Thu, Jul 31, 2014 at 5:13 PM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Thu, Jul 31, 2014 at 04:37:29PM +0400, Ilya Dryomov wrote:
>
>> This didn't make sense to me at first either, and I'll be happy to be
>> proven wrong, but we can reproduce this with rbd very reliably under
>> higher than usual load, and the revert makes it go away. What we are
>> seeing in the rbd scenario is the following.
>
> This is drivers/block/rbd.c ? I can find but a single mutex_lock() in
> there.
This is in net/ceph and include/linux/ceph.
Mutex A - struct ceph_osd_client::request_mutex, taken in alloc_msg(),
handle_timeout(), handle_osds_timeout(), ceph_osdc_start_request().
Mutex B - struct ceph_connection::mutex, taken in ceph_con_send().
dmesg with a sample dump of blocked tasks attached.
Basically everybody except kjournald:4398 is waiting for request_mutex,
which kjournald acquired in ceph_osdc_start_request(). kjournald itself,
however, sits waiting for ceph_connection::mutex, even though that mutex
has since been released.
>> Suppose foo needs mutexes A and B, bar needs mutex B. foo acquires
>> A and then wants to acquire B, but B is held by bar. foo spins
>> a little and ends up calling schedule_preempt_disabled() on line 484
>> above, but that call never returns, even though a hundred usecs later
>> bar releases B. foo ends up stuck in mutex_lock() indefinitely, but
>> still holds A and everybody else who needs A gets behind A. Given that
>> this A happens to be a central libceph mutex all rbd activity halts.
>> Deadlock may not be the best term for this, but never returning from
>> mutex_lock(&B) even though B has been unlocked is *a* problem.
>>
>> This obviously doesn't happen every time schedule_preempt_disabled() on
>> line 484 is called, so there must be some sort of race here. I'll send
>> along the actual rbd stack traces shortly.
>
> Smells like maybe current->state != TASK_RUNNING, does the below
> trigger?
>
> If so, you've wrecked something in whatever...
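One way to read that suspicion, sketched against the reschedule point quoted
earlier in the thread (pseudocode from the discussion, not actual kernel
source):

```
/* mutex_lock(&B) slowpath, optimistic spin loop: */
while (owner_running(...)) {
	if (need_resched())
		schedule_preempt_disabled();	/* "line 484" above */
	...
}

/* At this point the spinner is NOT on B's wait_list.  If some earlier
 * code left current->state at e.g. TASK_UNINTERRUPTIBLE instead of
 * TASK_RUNNING, schedule() dequeues the task and it really sleeps.
 * When B is later unlocked, the waker only wakes tasks on B's
 * wait_list -- nobody wakes the spinner, so mutex_lock(&B) never
 * returns even though B is free. */
```

If that is what is happening, the revert only hides the problem by removing
the sleep; the real fix would be in whatever code wrecked the task state.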
Trying it now.
Thanks,
Ilya