Message-ID: <CANGgnMYVoP-Z0Bv-VDEkJnvfa7Fi4-zY2F4A0PhMewGvwo3VVw@mail.gmail.com>
Date: Thu, 26 Jun 2014 12:50:24 -0700
From: Austin Schuh <austin@...oton-tech.com>
To: Richard Weinberger <richard.weinberger@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Mike Galbraith <umgwanakikbuti@...il.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
rt-users <linux-rt-users@...r.kernel.org>
Subject: Re: Filesystem lockup with CONFIG_PREEMPT_RT
On Wed, May 21, 2014 at 12:33 AM, Richard Weinberger
<richard.weinberger@...il.com> wrote:
> CC'ing RT folks
>
> On Wed, May 21, 2014 at 8:23 AM, Austin Schuh <austin@...oton-tech.com> wrote:
>> On Tue, May 13, 2014 at 7:29 PM, Austin Schuh <austin@...oton-tech.com> wrote:
>>> Hi,
>>>
>>> I am observing a filesystem lockup with XFS on a CONFIG_PREEMPT_RT
>>> patched kernel. I have currently only triggered it using dpkg. Dave
>>> Chinner on the XFS mailing list suggested that it was a rt-kernel
>>> workqueue issue as opposed to a XFS problem after looking at the
>>> kernel messages.
I've got a 100% reproducible test case that doesn't involve a
filesystem. I wrote a kernel module that triggers the bug when its
device node is written to, which makes it easy to enable tracing
around the event and capture everything.
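
Roughly, the idea is the following. This is only a simplified sketch of
that kind of reproducer (the device name, function names, and the exact
two-work-item choreography are illustrative here, not necessarily what
the attached killer_module.c does): a misc device whose write handler
takes a rw_semaphore for write, queues one work item that blocks on that
semaphore, and then waits for a second work item that can only run if
the worker pool notices that the first worker went to sleep.

#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/workqueue.h>
#include <linux/rwsem.h>
#include <linux/completion.h>

static DECLARE_RWSEM(killer_rwsem);
static DECLARE_COMPLETION(second_done);

/* First work item: sleeps on the rwsem, which the write handler below
 * holds for write.  If this sleep does not go through
 * wq_worker_sleeping(), the pool thinks the worker is still busy and
 * never wakes another one. */
static void first_fn(struct work_struct *work)
{
	down_read(&killer_rwsem);
	up_read(&killer_rwsem);
}

/* Second work item: only runs if the pool spins up another worker
 * after the first one blocks. */
static void second_fn(struct work_struct *work)
{
	complete(&second_done);
}

static DECLARE_WORK(first_work, first_fn);
static DECLARE_WORK(second_work, second_fn);

static ssize_t killer_write(struct file *file, const char __user *buf,
			    size_t count, loff_t *ppos)
{
	down_write(&killer_rwsem);
	/* Pin both items to the same per-CPU pool so they share a worker. */
	queue_work_on(0, system_wq, &first_work);
	queue_work_on(0, system_wq, &second_work);
	/* Healthy kernel: first_fn blocks on the rwsem, the pool wakes
	 * another worker, second_fn completes us, we release the rwsem
	 * and first_fn finishes.  Broken -rt kernel: second_fn never
	 * runs and we hang here. */
	wait_for_completion(&second_done);
	up_write(&killer_rwsem);
	return count;
}

static const struct file_operations killer_fops = {
	.owner = THIS_MODULE,
	.write = killer_write,
};

static struct miscdevice killer_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name  = "killer",
	.fops  = &killer_fops,
};

static int __init killer_init(void)
{
	return misc_register(&killer_dev);
}

static void __exit killer_exit(void)
{
	misc_deregister(&killer_dev);
}

module_init(killer_init);
module_exit(killer_exit);
MODULE_LICENSE("GPL");

The intent is that writing to the device (e.g. echo 1 > /dev/killer)
returns immediately on mainline, where the pool wakes another worker as
soon as the first one blocks, but hangs forever on the affected -rt
kernel.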
It looks like wq_worker_sleeping() is not triggered when a worker goes
to sleep on a rw_semaphore. This only happens with the RT patches, not
with the mainline kernel.

I can also foresee a second deadlock/bug coming into play shortly: if a
task holding the worker pool spinlock gets preempted, and we then need
to schedule more work from another worker thread that has just blocked
on a mutex, we end up trying to go to sleep on two locks at once.
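
For reference, mainline notifies the workqueue code from inside
__schedule() when a worker is about to sleep, something like this
(paraphrased from kernel/sched/core.c on a 3.x kernel; the exact
signatures and the surrounding conditions vary between versions):

	/* When a workqueue worker is about to sleep, tell the pool so
	 * it can wake an idle worker and keep work items flowing. */
	if (prev->flags & PF_WQ_WORKER) {
		struct task_struct *to_wakeup;

		to_wakeup = wq_worker_sleeping(prev, cpu);
		if (to_wakeup)
			try_to_wake_up_local(to_wakeup);
	}

If the -rt sleeping-lock slow path puts the worker to sleep without
going through that notification, the pool never wakes an idle worker,
and anything queued behind the blocked item just sits there, which is
consistent with the lockup I'm seeing.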
That is getting a bit deep into the scheduler for me... Any
suggestions on how to fix it?
Austin
[Attachment: killer_module.c, text/x-csrc, 4183 bytes]